CN112221137B - Audio processing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN112221137B
Authority
CN
China
Prior art keywords
audio
volume
virtual
virtual object
playing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011156439.5A
Other languages
Chinese (zh)
Other versions
CN112221137A (en)
Inventor
田牧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202011156439.5A
Publication of CN112221137A
Application granted
Publication of CN112221137B
Legal status: Active
Anticipated expiration

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/54 Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80 Special adaptations for executing a specific game genre or game mode
    • A63F13/837 Shooting of targets

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses an audio processing method and apparatus, an electronic device, and a storage medium, belonging to the field of audio and video technologies. A first audio serving as the background sound effect is played in an outdoor virtual environment; once the first virtual object initiates a shooting operation, the device adaptively reduces the volume of the first audio while the emission sound effect is playing, so that the user can focus on listening to the second audio. This achieves a play strategy that dynamically and intelligently adjusts the volume ratio of the first audio to the second audio as the behavior of the first virtual object changes, highlighting primary sound information and attenuating secondary sound information during real-time play, thereby optimizing the audio playing effect.

Description

Audio processing method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of audio and video technologies, and in particular, to an audio processing method and apparatus, an electronic device, and a storage medium.
Background
With the development of computer technology and the diversification of terminal functions, more and more games can be played on terminals. Shooting games are among the most popular: the terminal displays a virtual environment in an interface and displays a virtual object in that environment, and the user controls the virtual object through the terminal to battle other virtual objects.
In current shooting games, when a virtual object battles in an outdoor virtual environment (such as a city or a forest), multiple sound elements may occur at the same time, such as gunshots, ambient sound, explosions, and voice. The terminal generally provides the user with a volume setting for background sound and voice, and mixes the sound elements at a fixed volume ratio based on the user's setting (or a default setting). Because the volume ratio of each sound element in the mix is fixed, the audio playing effect is poor.
Disclosure of Invention
The embodiment of the application provides an audio processing method, an audio processing device, electronic equipment and a storage medium, which can optimize an audio playing effect. The technical scheme is as follows:
in one aspect, an audio processing method is provided, and the method includes:
responding to a first virtual object located in an outdoor virtual environment, and playing a first audio, wherein the first audio is a background sound effect of the outdoor virtual environment;
responding to the first virtual object to emit a virtual item in the outdoor virtual environment, and playing a second audio, wherein the second audio is an emitting sound effect of the virtual item;
and when the second audio is played, reducing the volume of the first audio.
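Taken together, the three steps above describe an audio "ducking" pass: while the emission sound effect (second audio) plays, the background sound effect (first audio) is attenuated. A minimal Python sketch follows; the function name, the `duck_gain` value, and the frame-by-frame mixing model are illustrative assumptions, not taken from the patent:

```python
def mix_frame(bg_sample, fx_sample, fx_playing, duck_gain=0.3):
    """Mix one frame of background audio with the emission sound effect.

    While the second audio (fx) is playing, the first audio (bg) is
    attenuated by duck_gain; otherwise it plays at full volume.
    """
    bg_gain = duck_gain if fx_playing else 1.0
    return bg_sample * bg_gain + fx_sample

# Background at 0.8, shot effect at 0.5: with ducking active, the
# background contributes only 0.8 * 0.3 = 0.24 to the mixed frame.
quiet = mix_frame(0.8, 0.5, fx_playing=True)
loud = mix_frame(0.8, 0.0, fx_playing=False)
```

The same per-frame decision generalizes to any pair of sound elements whose volume ratio the method adjusts dynamically.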
In one aspect, an audio processing apparatus is provided, the apparatus comprising:
the playing module is used for responding to the situation that a first virtual object is located in an outdoor virtual environment and playing a first audio, wherein the first audio is a background sound effect of the outdoor virtual environment;
the playing module is further configured to respond to the first virtual object to emit a virtual item in the outdoor virtual environment, and play a second audio, where the second audio is an emission sound effect of the virtual item;
and the reducing module is used for reducing the volume of the first audio when the second audio is played.
In one possible embodiment, the reducing module comprises:
a first reducing unit configured to reduce the volume of the first audio based on an automatic ducking function; or, alternatively,
a second reducing unit configured to reduce the volume of the first audio based on a side-chain effector.
In a possible embodiment, the second reduction unit is configured to:
in response to the side-chain effector detecting the level signal of the second audio, modulating the volume of the first audio based on the side-chain effector, wherein the volume of the modulated first audio is less than the volume of the first audio before modulation.
In one possible embodiment, the apparatus further comprises:
a creation module for creating a control parameter associated with the output level value;
the mounting module is used for mounting the side chain effector on the second audio frequency;
and the association module is used for associating the control parameter with the volume of the modulated object of the side chain effector.
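The side-chain arrangement described above, in which a control parameter tracks the output level of the second audio and modulates the volume of the first audio, can be sketched as an envelope follower driving a gain reduction. All names, coefficients, and thresholds below are illustrative assumptions, not values from the patent:

```python
def envelope(samples, attack=0.5, release=0.05):
    """One-pole envelope follower: tracks the level of the side-chain input."""
    env, out = 0.0, []
    for s in samples:
        # Rise quickly when the signal exceeds the envelope, fall slowly after.
        coeff = attack if abs(s) > env else release
        env += coeff * (abs(s) - env)
        out.append(env)
    return out

def sidechain_duck(bg, fx, depth=0.7, threshold=0.05):
    """Reduce bg (first audio) while the detected fx level exceeds a threshold."""
    ducked = []
    for b, level in zip(bg, envelope(fx)):
        gain = 1.0 - depth if level > threshold else 1.0
        ducked.append(b * gain)
    return ducked

bg = [0.5, 0.5, 0.5, 0.5]   # steady background sound effect
fx = [0.0, 1.0, 1.0, 1.0]   # emission sound effect begins on the second frame
out = sidechain_duck(bg, fx)  # background drops once the level is detected
```

Dedicated middleware side-chain effectors work on buses rather than raw sample lists, but the control flow (detect level, modulate target volume) is the same.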
In one possible embodiment, the first reducing unit is configured to:
in response to detecting a play event of the second audio based on an automatic ducking function, reduce the volume of the first audio according to an S-shaped audio control curve.
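One common choice for an S-shaped control curve is the smoothstep polynomial, which starts and ends with zero slope so the volume change has no audible click. The patent does not specify the exact curve; the sketch below and its parameters are illustrative:

```python
def s_curve(t):
    """Smoothstep: an S-shaped curve from 0 to 1 with zero slope at both ends."""
    t = max(0.0, min(1.0, t))
    return t * t * (3.0 - 2.0 * t)

def ducked_volume(elapsed, fade_time, base_vol, reduced_vol):
    """Interpolate from base_vol down to reduced_vol along the S-curve."""
    p = s_curve(elapsed / fade_time)
    return base_vol + (reduced_vol - base_vol) * p
```

At the start of the fade the volume is untouched, at the midpoint it is halfway reduced, and at the end it rests at the reduced level, with gentle slopes at both extremes.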
In one possible embodiment, the reducing module is configured to:
reduce the volume of the first audio by a first decibel amount within a first target duration after the playing start time of the second audio, where the first target duration is less than or equal to the playing duration of the second audio.
In one possible embodiment, the apparatus further comprises:
a lifting module configured to raise the volume of the first audio by the first decibel amount within a second target duration after the playing end time of the second audio.
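The two embodiments above (reduce by a first decibel amount within a first target duration after the second audio starts, then raise it back within a second target duration after it ends) amount to an attack/hold/release gain envelope expressed in decibels. The concrete numbers below (9 dB, 0.1 s attack, 0.3 s release) are illustrative assumptions:

```python
def gain_at(t, fx_start, fx_end, duck_db=9.0, t_attack=0.1, t_release=0.3):
    """Linear gain applied to the first audio at time t (seconds).

    Within t_attack after the second audio starts, the first audio is
    reduced by duck_db decibels; within t_release after it ends, the
    reduction is removed again.
    """
    if t < fx_start:
        db = 0.0
    elif t < fx_start + t_attack:      # ramp down over the first target duration
        db = -duck_db * (t - fx_start) / t_attack
    elif t < fx_end:                   # hold the reduced volume
        db = -duck_db
    elif t < fx_end + t_release:       # ramp back up over the second target duration
        db = -duck_db * (1.0 - (t - fx_end) / t_release)
    else:
        db = 0.0
    return 10.0 ** (db / 20.0)         # convert dB change to a linear gain factor
```

Working in decibels and converting to a linear factor at the end matches how mixers express volume reduction; a 9 dB cut corresponds to multiplying the signal by roughly 0.35.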
In one possible implementation, the playing module is further configured to: responding to the fact that the virtual prop hits a second virtual object, and playing a third audio, wherein the third audio is a hit prompt sound effect of the virtual prop;
the reduction module is further configured to: when the third audio is played, the action volume of the first virtual object and the volume of the second audio are reduced;
the lifting module is further configured to lift the action volume and the volume of the second audio after the third video is played.
In one possible implementation, the playing module is further configured to: in response to a second virtual object in the outdoor virtual environment performing an action, play a fourth audio, where the fourth audio includes at least one of an action sound effect or an environment interaction sound effect of the second virtual object;
the reducing module is further configured to: decrease the volume of the fourth audio in response to the first virtual object producing a displacement in the outdoor virtual environment;
the lifting module is further configured to: increase the volume of the fourth audio in response to the first virtual object completing the displacement.
In one aspect, an electronic device is provided, which includes one or more processors and one or more memories, in which at least one computer program is stored, the at least one computer program being loaded and executed by the one or more processors to implement an audio processing method according to any one of the possible implementations described above.
In one aspect, a storage medium is provided, in which at least one computer program is stored, the at least one computer program being loaded and executed by a processor to implement the audio processing method according to any one of the possible implementations described above.
In one aspect, a computer program product or computer program is provided that includes one or more program codes stored in a computer readable storage medium. The one or more program codes can be read from a computer-readable storage medium by one or more processors of the electronic device, and the one or more processors execute the one or more program codes, so that the electronic device can execute the audio processing method of any one of the above possible embodiments.
The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:
A first audio serving as the background sound effect is played in the outdoor virtual environment; once the first virtual object initiates a shooting operation, the device adaptively reduces the volume of the first audio while the emission sound effect plays, so that the user can focus on listening to the second audio. This achieves a play strategy that dynamically and intelligently adjusts the volume ratio of the first audio to the second audio as the behavior of the first virtual object changes, highlights primary sound information and attenuates secondary sound information during real-time play, and optimizes the audio playing effect.
Drawings
To more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an implementation environment of an audio processing method according to an embodiment of the present application;
fig. 2 is a flowchart of an audio processing method provided in an embodiment of the present application;
fig. 3 is a flowchart of an audio processing method provided in an embodiment of the present application;
FIG. 4 is an interface diagram of an outdoor virtual environment provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of a setting interface of an automatic ducking function according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of a first audio according to an embodiment of the present disclosure;
fig. 7 is a schematic flowchart of an audio processing method provided in an embodiment of the present application;
fig. 8 is a flowchart of an audio processing method provided in an embodiment of the present application;
FIG. 9 is an interface diagram of an outdoor virtual environment provided by an embodiment of the present application;
FIG. 10 is a schematic diagram of a setting interface of an automatic ducking function according to an embodiment of the present application;
fig. 11 is a schematic flowchart of an audio processing method provided in an embodiment of the present application;
fig. 12 is a flowchart of an audio processing method provided in an embodiment of the present application;
FIG. 13 is a schematic diagram of a setting interface of an automatic ducking function according to an embodiment of the present application;
fig. 14 is a schematic flowchart of an audio processing method provided in an embodiment of the present application;
fig. 15 is a schematic structural diagram of an audio processing apparatus according to an embodiment of the present application;
fig. 16 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
The terms "first," "second," and the like in this application are used to distinguish between identical or similar items that have substantially the same function; it should be understood that "first," "second," and "nth" imply no logical or temporal dependency and no limitation on number or order of execution.
The term "at least one" in this application means one or more, and the meaning of "a plurality" means two or more, for example, a plurality of first locations means two or more first locations.
Hereinafter, terms related to the present application are explained.
Virtual environment: the virtual environment displayed (or provided) when an application runs on the terminal. The virtual environment may be a simulation of the real world, a semi-simulated, semi-fictional environment, or a purely fictional environment. The user may control the movement of a virtual object in the virtual environment. Optionally, the virtual environment may be any one of a two-dimensional, 2.5-dimensional, or three-dimensional virtual environment; the dimension of the virtual environment is not limited in the embodiments of the application. In general, virtual environments may be divided into indoor and outdoor virtual environments according to whether they are located inside a building; for example, an outdoor virtual environment includes sky, land, forest, sea, and the like, and the land includes environmental elements such as desert and city.
Virtual object: refers to a movable object in a virtual environment. The movable object can be a virtual character, a virtual animal, an animation character, etc., such as: characters, animals, plants, oil drums, walls, stones, etc. displayed in the virtual environment. The virtual object may be an avatar in the virtual environment that is virtual to represent the user. The virtual environment may include a plurality of virtual objects, each virtual object having its own shape and volume within the virtual environment, occupying a portion of the space within the virtual environment.
Alternatively, the virtual object may be a Player Character controlled by an operation on the client, or may also be a Non-Player Character (NPC) provided in the virtual environment interaction. Optionally, the virtual object is a virtual character playing a game in a virtual environment. Optionally, the number of virtual objects participating in interaction in the virtual environment may be preset, or may be dynamically determined according to the number of clients participating in interaction.
Multiplayer Online Battle Arena (MOBA) game: a game that provides several base points in a virtual environment, in which users in different camps control virtual objects to battle, seize base points, or destroy the enemy camp's base points. For example, a MOBA game may divide users into at least two enemy camps, with the virtual teams of each camp occupying their own map areas and competing toward a winning condition. Such winning conditions include, but are not limited to: occupying base points or destroying the enemy camp's base points, killing the enemy camp's virtual objects, surviving in a specified scene and time, seizing certain resources, or exceeding the opponent's interaction score within a specified time. For example, a MOBA game may divide users into two enemy camps and disperse the user-controlled virtual objects in the virtual environment to compete with each other, with destroying or dominating all enemy base points as the winning condition.
Optionally, each virtual team includes one or more virtual objects, such as 1, 2, 3, or 5. According to the number of virtual objects in each team participating in the tactical competition, the competition can be divided into 1v1, 2v2, 3v3, 5v5 matches, and so on, where 1v1 means "1 versus 1"; details are not repeated here. Alternatively, the MOBA game may be played in rounds, and the map of each tactical competition may be the same or different. One play of the MOBA game lasts from the moment the game starts to the moment the winning condition is achieved.
Shooter Game (STG): a game in which a virtual object uses hot-weapon virtual props for ranged attacks. Shooting games are a branch of action games and have obvious action-game characteristics. Optionally, shooting games include, but are not limited to, first-person shooting games (FPS), third-person shooting games, top-view shooting games, head-up shooting games, platform shooting games, scrolling shooting games, light-gun shooting games, keyboard-and-mouse shooting games, shooting-range games, tactical shooting games, and the like; the embodiments of the present application do not specifically limit the type of shooting game.
In a shooting game or MOBA game, the user may control a virtual object to free-fall, glide, or open a parachute to descend in the sky of the virtual environment; to run, jump, crawl, or walk crouched on land; or to swim, float, or dive in the sea. Naturally, the user may also control the virtual object to move through the virtual environment in a virtual vehicle such as a virtual car, virtual aircraft, or virtual yacht; the above scenes are only examples and are not limiting. The user may also control the virtual object to battle other virtual objects with virtual props, for example thrown virtual weapons such as grenades, trip mines, sticky grenades, and laser trip mines; shooting virtual weapons such as machine guns, pistols, rifles, and sentry machine guns; or summoned virtual soldiers (e.g., mechanical zombies).
Hereinafter, a system architecture according to the present application will be described.
Fig. 1 is a schematic diagram of an implementation environment of an audio processing method according to an embodiment of the present application. Referring to fig. 1, the implementation environment includes: a first terminal 120, a server 140, and a second terminal 160.
The first terminal 120 is installed and operated with an application program supporting a virtual environment. Optionally, the application program includes any one of an FPS game, a third person shooter game, an MOBA game, a virtual reality application program, a three-dimensional map program, a military simulation program, or a multiplayer gunfight type live game. In some embodiments, the first terminal 120 is a terminal used by a first user who uses the first terminal 120 to operate a first virtual object located in a virtual environment for activities including, but not limited to: adjusting at least one of body posture, crawling, walking, running, riding, jumping, driving, picking, shooting, attacking, throwing. Illustratively, the first virtual object is a first virtual character, such as a simulated persona or an animated persona.
The first terminal 120 and the second terminal 160 are in direct or indirect communication connection with the server 140 through a wireless network or a wired network.
The server 140 includes at least one of a server, a plurality of servers, a cloud computing platform, or a virtualization center. The server 140 is used to provide background services for applications that support virtual environments. Alternatively, the server 140 undertakes primary computational work and the first and second terminals 120, 160 undertake secondary computational work; alternatively, the server 140 undertakes the secondary computing work and the first terminal 120 and the second terminal 160 undertakes the primary computing work; alternatively, the server 140, the first terminal 120, and the second terminal 160 perform cooperative computing by using a distributed computing architecture.
Optionally, the server 140 is an independent physical server, or a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, a cloud storage, a web service, cloud communication, a middleware service, a domain name service, a security service, a Content Delivery Network (CDN), a big data and artificial intelligence platform, and the like.
The second terminal 160 is installed and operated with an application program supporting a virtual environment. Optionally, the application program includes any one of an FPS game, a third person shooter game, an MOBA game, a virtual reality application program, a three-dimensional map program, a military simulation program, or a multiplayer gunfight type live game. In some embodiments, the second terminal 160 is a terminal used by a second user who uses the second terminal 160 to operate a second virtual object located in the virtual environment for activities including, but not limited to: adjusting at least one of body posture, crawling, walking, running, riding, jumping, driving, picking, shooting, attacking, throwing. Illustratively, the second virtual object is a second virtual character, such as a simulated persona or an animated persona.
Optionally, the first virtual object controlled by the first terminal 120 and the second virtual object controlled by the second terminal 160 are in the same virtual environment, and the first virtual object may interact with the second virtual object in the virtual environment.
The first virtual object and the second virtual object may be in an enemy relationship, for example, the first virtual object and the second virtual object may belong to different camps, and between the virtual objects in the enemy relationship, the virtual objects may interact in a battle manner on the land in a manner of shooting each other, for example, the virtual props are launched by each other. In other embodiments, the first virtual object and the second virtual object may be in a teammate relationship, for example, the first virtual character and the second virtual character may belong to the same camp, the same team, have a friend relationship, or have temporary communication rights.
Alternatively, the applications installed on the first terminal 120 and the second terminal 160 are the same, or the applications installed on the two terminals are the same type of application on different operating system platforms. The first terminal 120 and the second terminal 160 may each generally refer to one of a plurality of terminals; this embodiment is only illustrated with the first terminal 120 and the second terminal 160. The device types of the first terminal 120 and the second terminal 160 are the same or different and include at least one of: a smartphone, a tablet computer, a smart speaker, a smart watch, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop, or a desktop computer. For example, the first terminal 120 and the second terminal 160 may be smartphones or other handheld portable gaming devices. The following embodiments are illustrated with the terminal being a smartphone.
Those skilled in the art will appreciate that the number of terminals described above may be greater or fewer. For example, the number of the terminals may be only one, or several tens or hundreds of the terminals, or more. The number of terminals and the type of the device are not limited in the embodiments of the present application.
Based on the implementation environment: in current mobile shooting games, the relationship among sound elements such as gunshots, ambient sound, explosions, and player voice is processed at a constant volume ratio. Optionally, volume adjustment options for system sound and player voice are provided, and the user manually adjusts the volume ratio of these two sound elements.
On one hand, whether the volume ratio is set by the system or adjusted by the user, the volume ratio of the sound elements in a shooting game can only be kept relatively fixed, so the mixed sound is relatively monotonous. The user can only adjust player voice or system sound (gunshots, ambient sound, explosions, and the like) as a whole and cannot fine-tune a particular type of system sound. Manual adjustment is difficult to operate, the user can hardly bring the various sound elements to a suitable volume ratio, and the operability of audio adjustment is poor.
On the other hand, in an outdoor virtual environment where many sound elements are triggered at the same time, sound elements with a low sound pressure level are easily masked by those with a high sound pressure level. Sound elements that should be highlighted, that need to draw the user's attention, or that give the user a sense of excitement cannot have their volume ratio adjusted dynamically; the listening experience of the mix is seriously damaged, resulting in a poor audio playing effect.
In view of this, embodiments of the present application provide an audio processing method: when multiple sound elements occur at the same time, the device selectively reduces the volume of certain types of sound elements so that the elements that are not reduced stand out. The mix gains a more stereoscopic and clearly layered quality that follows psychoacoustic logic, creating, at the audio level, a more immersive game experience for the user.
Fig. 2 is a flowchart of an audio processing method according to an embodiment of the present application. Referring to fig. 2, this embodiment is applied to an electronic device and is described taking a terminal as an example of the electronic device, where the terminal is the first terminal or the second terminal in the above implementation environment. The embodiment includes the following steps:
201. The terminal plays a first audio in response to the first virtual object being located in an outdoor virtual environment, where the first audio is a background sound effect of the outdoor virtual environment.
Optionally, the terminal is an electronic device used by any user, an application program supporting a virtual environment and audio playing, such as a shooting game application, an MOBA game application, and the like, is installed on the terminal, and the user can start the application program on the terminal and log in the application program.
Optionally, the first virtual object refers to a virtual object currently manipulated by the user on the terminal, that is, the first virtual object refers to a controlled virtual object.
Alternatively, the outdoor virtual environment refers to a virtual environment located outside a building, for example, the outdoor virtual environment includes types of forest, sea, desert, city, etc., and different types of outdoor virtual environments generally have different background sound effects (colloquially called "ambient sound"), which are a type of sound elements in the outdoor virtual environment that are most stable and persistent, and exist in the game background throughout the entire flow of the game.
Optionally, the first audio refers to a background sound effect corresponding to an outdoor virtual environment where the first virtual object is currently located.
In some embodiments, the terminal starts the application program in response to a start operation of the application program by a user, where the start operation may be that the user performs a touch operation on an icon of the application program on a desktop of the terminal, or the user inputs a start instruction for the application program to the intelligent voice assistant, and the start instruction may include a voice instruction or a text instruction, and the type of the start instruction is not specifically limited in the embodiments of the present application.
In some embodiments, when the user has set an automatic start condition for the application, the terminal operating system automatically starts the application upon detecting that the condition is met. Optionally, the automatic start condition is periodic (for example, starting the application at 8 o'clock every night) or is to start the application automatically after boot; the embodiments of the present application do not specifically limit the automatic start condition.
In some embodiments, the terminal starts an application program, and displays an operation interface in the application program, where the operation interface includes a selection control for the office mode, a setting control for an account option, a selection control for a virtual environment, and the like. Optionally, the terminal detects, in real time, a selection operation of the user on each game mode in the operation interface, and determines the selected game mode as the game mode configured in the current game. Optionally, the selection operation may be a click operation, a long-time press operation, or the like, or may also be a trigger operation on a shortcut key corresponding to any pair of office modes, which is not limited in this embodiment of the application.
Optionally, after the user selects the match mode in the operation interface, the terminal detects, in real time, a selection operation of the user on each virtual environment in the operation interface, and determines the selected virtual environment as the virtual environment configured for the match. Optionally, the selection operation may be a click operation, a long-press operation, or the like, or a trigger operation on a shortcut key corresponding to any virtual environment, which is not limited in this embodiment of the application.
After the user selects the virtual environment, the terminal starts the match and loads the virtual environment selected by the user. It should be noted that if the user does not select a virtual environment, the terminal loads the virtual environment configured by default for the current match mode. Optionally, the terminal obtains an environment type of the loaded virtual environment; in response to the environment type being an outdoor virtual environment, the terminal obtains an environment identifier (ID) of the outdoor virtual environment, queries, using the environment ID as an index, the background sound effect stored in correspondence with the environment ID from an audio library, obtains the queried background sound effect as the first audio, and invokes an audio playing control to play the first audio.
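The query-by-ID flow described above amounts to a keyed lookup with a fallback; the sketch below illustrates it, with all dictionary contents, file names, and the function name being assumptions for illustration rather than part of the patent.

```python
# Illustrative sketch of the audio-library lookup described above: the
# environment ID is used as an index into a store of background sound
# effects, and a default effect is used when no entry matches.

AUDIO_LIBRARY = {
    "forest": "forest_ambience.ogg",  # hypothetical entries
    "city": "city_ambience.ogg",
}

DEFAULT_BACKGROUND = "generic_outdoor_ambience.ogg"

def query_background_effect(environment_id: str) -> str:
    """Return the background sound effect stored for this environment ID."""
    return AUDIO_LIBRARY.get(environment_id, DEFAULT_BACKGROUND)
```

The fallback corresponds to the alternative implementation discussed in this section, in which every outdoor virtual environment shares one background sound effect.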
Optionally, the audio library may be a local database, in which case the terminal reads the first audio from the local disk or the local cache, which shortens the time required to acquire the first audio.
The above process shows a possible implementation of querying the first audio based on the environment ID when different outdoor virtual environments correspond to different background sound effects. In another possible implementation, all outdoor virtual environments correspond to the same background sound effect, so the terminal does not need to acquire the environment ID and only needs to acquire the unique background sound effect stored in the audio library as the first audio; this is not specifically limited by the embodiment of the present application. Similarly, the audio library may be located locally or in the cloud, which is not described herein again.
In some embodiments, when playing the first audio, the terminal determines the playing volume of the first audio and then plays the first audio at that volume. Optionally, the playing volume is a system default value or is manually set by the user; the manner of setting the playing volume is not specifically limited in the embodiments of the present application.
It should be noted that, when configuring the playing volume of the first audio, the volume should be neither too large nor too small. When the playing volume is too large, the background sound effect may mask the action sounds of a second virtual object, affecting the efficiency with which the first virtual object acquires information during combat, where the second virtual object refers to any virtual object other than the first virtual object. When the playing volume is too small, the user can hardly notice the background sound effect, which leaves the scene sound of the outdoor virtual environment thin and lacking in immersion.
In some embodiments, the first audio includes at least one of a continuous Loop layer or a point sound source. The Loop layer refers to a scene sound that is present continuously, such as wind sound, a city background sound, or a forest background sound, and generally needs to be played in a loop; a point sound source refers to a sound in the virtual scene that is triggered to play at certain time intervals, such as cicada chirps or bird calls.
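The two layers can be roughly illustrated as follows: the Loop layer simply repeats without interruption, while each point sound source fires at randomized intervals. Everything in this sketch, including the interval range and function name, is an assumption for illustration.

```python
import random

def point_source_trigger_times(duration_s: float,
                               min_gap_s: float,
                               max_gap_s: float,
                               seed: int = 0) -> list:
    """Return the times (in seconds) at which a point sound source, such as
    a bird call, is triggered, spaced by random gaps drawn from
    [min_gap_s, max_gap_s]. The Loop layer, by contrast, needs no schedule:
    it plays continuously and repeats."""
    rng = random.Random(seed)
    times, t = [], 0.0
    while True:
        t += rng.uniform(min_gap_s, max_gap_s)
        if t >= duration_s:
            break
        times.append(t)
    return times
```

Seeding the generator is only for reproducibility in this sketch; in a real scene the gaps would be freshly randomized so the bird and cicada calls feel intermittent rather than mechanical.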
202. The terminal, in response to the first virtual object launching a virtual prop in the outdoor virtual environment, plays a second audio, where the second audio is the launch sound effect of the virtual prop.
Optionally, the virtual prop includes a throwable virtual weapon such as a grenade, a cluster mine, a sticky grenade, or a laser tripmine, or the virtual prop includes a shooting virtual weapon such as a machine gun, a pistol, a rifle, or a sentry machine gun; the embodiment of the present application does not specifically limit the type of the virtual prop.
In some embodiments, the terminal displays a call control in the outdoor virtual environment. When the user wants to call a virtual prop, the user performs a trigger operation on the call control; the terminal then receives the trigger signal on the call control, triggers a creation instruction, and creates the virtual prop. The call control is used for calling the virtual prop into the outdoor virtual environment, and may take the form of a button displayed in the outdoor virtual environment in a floating manner.
Optionally, after the terminal finishes creating the virtual prop in the outdoor virtual environment, the virtual prop is displayed on a target portion of the first virtual object to show that the first virtual object can control the virtual prop; for example, the target portion includes at least one of a shoulder, the waist, or the back of the first virtual object.
Optionally, after displaying the virtual prop, the terminal displays a shooting control of the virtual prop in the outdoor virtual environment, where the shooting control is used to trigger the first virtual object to launch the virtual prop. In response to detecting a trigger operation of the user on the shooting control, the terminal controls the first virtual object to launch the virtual prop in the outdoor virtual environment, so that the launching process of the virtual prop is displayed in the outdoor virtual environment.
In some embodiments, during launching of the virtual prop, in response to detecting a trigger operation of the user on the shooting control, the view mode of the outdoor virtual environment is adjusted from a panoramic view, used for omnidirectionally observing the behavior of second virtual objects in the outdoor virtual environment, to a local view, used for observing a local range of the scene in the outdoor virtual environment more clearly. In the local view, the user can adjust the crosshair at which the virtual prop is aimed, so as to control the first virtual object to perform a more accurate shooting operation. Optionally, when the view mode is adjusted, the focal length of a virtual camera (Camera) mounted on the first virtual object is adjusted in the outdoor virtual environment, the local scene within a target range around the current crosshair is enlarged by a target multiple, and the enlarged local scene is displayed.
Optionally, the trigger operation on the shooting control includes, but is not limited to, one or a combination of clicking, long-pressing, double-clicking, pressing, and the like. In one example, if the trigger operation on the shooting control is pressing, the virtual prop may be triggered to launch when the user releases the press after adjusting the crosshair in the local view; in another example, if the trigger operation on the shooting control is clicking, the virtual prop may be triggered to launch when the user clicks the shooting control again after adjusting the crosshair in the local view.
In some embodiments, in response to the first virtual object launching the virtual prop in the outdoor virtual environment, since the virtual prop usually produces sound effects such as gunshots and explosions when launched, the terminal acquires the prop ID of the virtual prop, queries, using the prop ID as an index, the launch sound effect stored in correspondence with the prop ID from the audio library, acquires the queried launch sound effect as the second audio, and invokes the audio playing control to play the second audio. Optionally, the second audio includes at least one of a gunshot, an explosion sound, or a firing sound.
It should be noted that the manner of playing the second audio in step 202 is similar to the manner of playing the first audio in step 201, and is not described herein again.
203. When playing the second audio, the terminal decreases the volume of the first audio.
Optionally, the terminal decreases the volume of the first audio by a first decibel within a first target time length after the play start time of the second audio. The first target duration is any value greater than or equal to 0, and the first target duration is less than or equal to the playing duration of the second audio, for example, the first target duration is 1 second.
In the above process, after detecting the play event of the second audio, the terminal determines the play start time and decreases the volume of the first audio by the first decibel within the first target duration after the play start time; that is, the terminal obtains a target volume by subtracting the first decibel from the playing volume of the first audio and decreases the volume of the first audio from the playing volume to the target volume. The first decibel is any value greater than 0; for example, the first decibel is 6 decibels.
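The decibel arithmetic above can be made concrete: subtracting the first decibel gives the target volume, and a reduction of 6 dB roughly halves the linear amplitude. The function names in this sketch are illustrative, not from the patent.

```python
def ducked_volume_db(play_volume_db: float, reduction_db: float = 6.0) -> float:
    """Target volume obtained by subtracting the first decibel value
    (e.g. 6 dB) from the playing volume of the first audio."""
    return play_volume_db - reduction_db

def db_to_linear_gain(db: float) -> float:
    """Convert a relative decibel change to a linear amplitude factor."""
    return 10.0 ** (db / 20.0)
```

Note that -6 dB corresponds to a linear gain of about 0.5, so ducking by the example value of 6 decibels roughly halves the background amplitude.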
In some embodiments, the terminal reduces the volume of the first audio based on an automatic ducking (auto-ducking) function. The auto-ducking function is an audio processing technique that lowers the volume level of one audio signal in order to make another, concurrent audio signal more prominent. In one example, the terminal provides this auto-ducking function based on the Wwise audio engine.
Optionally, in response to detecting a play event of the second audio based on the auto-ducking function, the terminal decreases the volume of the first audio according to an S-shaped audio control curve (S-Curve). In this process, after the terminal detects the play event of the second audio, the volume of the first audio is gradually reduced along the S-Curve; because the S-Curve is a smooth envelope curve, the user does not perceive a sudden drop in the background sound effect, which provides a better listening experience.
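A smooth S-shaped envelope of the kind described can be approximated with a smoothstep function. The exact curve used by the audio engine is not specified here, so this is only an illustrative sketch under that assumption.

```python
def s_curve(progress: float) -> float:
    """Smoothstep: 0 at the start, 1 at the end, with zero slope at both
    ends, so the volume change is perceived gradually rather than abruptly."""
    p = min(max(progress, 0.0), 1.0)
    return p * p * (3.0 - 2.0 * p)

def faded_volume_db(start_db: float, target_db: float,
                    t_s: float, duration_s: float) -> float:
    """Volume (dB) at time t_s while fading from start_db to target_db
    over duration_s (e.g. the first target duration) along the S-curve."""
    return start_db + (target_db - start_db) * s_curve(t_s / duration_s)
```

With start 0 dB, target -6 dB, and a 1-second duration, the fade begins and ends gently and passes through -3 dB at the midpoint.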
In the above process, the volume of the first audio is reduced within the first target duration so that the user can concentrate on listening to the second audio (that is, the launch sound effect of the virtual prop), and the volume ratio of the first audio to the second audio is adaptively and dynamically adjusted as the behavior of the first virtual object changes. Therefore, during real-time play, primary sound information is highlighted and secondary sound information is attenuated, which greatly improves the listening experience of the audio mix and optimizes the audio playing effect.
In an exemplary scenario, if the first target duration is less than the playing duration of the second audio, then after the terminal gradually reduces the volume of the first audio to the target volume, the first audio continues to be played at the target volume without being reduced further, so that the background sound effect of the outdoor virtual environment remains faintly audible instead of disappearing entirely.
In an exemplary scenario, if the first target duration is equal to the playing duration of the second audio, then as the second audio plays, the volume of the first audio is gradually reduced, and the reduction stops only when the second audio finishes playing. This matches the real environment: as an engagement grows more intense, the human ear becomes better at capturing the launch sound effects with their higher sound pressure level, and the continuous background sound effect is slowly tuned out.
In other embodiments, in addition to the auto-ducking function, the terminal may decrease the volume of the first audio based on a side-chain effector. Optionally, in response to the side-chain effector detecting the level signal of the second audio, the terminal modulates the volume of the first audio based on the side-chain effector, where the volume of the first audio after modulation is smaller than its volume before modulation. Using the side-chain effector to reduce the volume realizes side-chain control of the volume, that is, a Real-Time Parameter Control (RTPC) is used to associate the output level with the volume.
Optionally, before modulating the volume, the terminal may make the following initialization settings: creating a control parameter (Game Parameter) associated with the output level value; mounting the side-chain effector on the second audio, that is, mounting the side-chain effector on the input object (the second audio) of the side-chain signal; and associating the control parameter with the volume of the modulated object of the side-chain effector, that is, associating the control parameter with the modulated object (the first audio) at the output end of the side chain.
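The side-chain arrangement can be sketched as a level detector on the second audio (the input object) whose output drives the gain of the first audio (the modulated object). The threshold, attenuation law, and names below are assumptions for illustration, not the patent's actual implementation or any real Wwise API.

```python
def sidechain_gain(sidechain_level: float, threshold: float = 0.01,
                   reduction_db: float = 6.0) -> float:
    """Linear gain to apply to the first audio (the modulated object).
    While the side-chain input (the second audio) carries a level signal
    above the threshold, the gain is reduced; otherwise it stays at 1.0."""
    if sidechain_level > threshold:
        return 10.0 ** (-reduction_db / 20.0)
    return 1.0
```

Because this gain tracks the level signal itself rather than a play event, a silent lead-in segment of the second audio would cause no ducking at all, which is the behavior contrasted with the event-driven approach.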
The difference between the RTPC method and the auto-ducking method is as follows. The auto-ducking method monitors the play event of the second audio to decide whether to compress the volume of the first audio; however, if there is a blank sound segment (commonly called a mute period) at the beginning or end of the second audio, the volume of the first audio is still reduced while the second audio plays the blank segment, which presents the user with a background sound effect that weakens for no audible reason. The RTPC method instead decides whether to start volume compression by detecting whether a level signal of the second audio is present, so the volume compression of the first audio is completed synchronously at the moment the second audio actually produces a level signal. Although the operation is slightly more complicated than the auto-ducking method, its real-time performance is better.
In some embodiments, the terminal increases the volume of the first audio by the first decibel within a second target duration after the play end time of the second audio. That is, the terminal restores the volume of the first audio to the playing volume it had before the second audio was played, so that after the first virtual object finishes shooting the virtual prop, the volume of the background sound effect is raised back to its normal value in time, giving the game a stronger sense of immersion and realism in its sound effects.
In some embodiments, the terminal, in response to the virtual prop hitting a second virtual object, plays a third audio, where the third audio is the hit prompt sound effect of the virtual prop; the hit prompt sound effect is a prompt sound, produced in cooperation with a visual crosshair expansion effect when the first virtual object hits the second virtual object, that is used to indicate that the virtual prop has hit the second virtual object. While the third audio is played, the action volume of the first virtual object and the volume of the second audio are reduced; after the third audio finishes playing, the action volume and the volume of the second audio are increased.
In the above process, when the virtual prop hits the second virtual object, the hit prompt sound effect (the third audio) usually has a low-to-mid-frequency tone, its volume is lower than that of the second audio, and it is easily masked by the higher-frequency second audio during an intense battle; reducing the action volume and the volume of the second audio therefore keeps the hit prompt audible.
In some embodiments, the terminal, in response to a second virtual object in the outdoor virtual environment performing an action, plays a fourth audio, where the fourth audio includes at least one of the action sound effect or the environment interaction sound effect of the second virtual object; for example, the fourth audio includes footsteps, the rustling of clothes or a backpack, and the like. In response to the first virtual object displacing in the outdoor virtual environment, for example through a sprint or jump action, the terminal reduces the volume of the fourth audio; in response to the first virtual object completing the displacement, the terminal increases the volume of the fourth audio.
In this process, while controlling the first virtual object to displace (for example, by sprinting or jumping), the terminal plays the action sound effect and environment interaction sound effect of the first virtual object and simultaneously suppresses the action sound effect and environment interaction sound effect (the fourth audio) emitted by second virtual objects in the same outdoor virtual environment, thereby weakening the first virtual object's ability to acquire sound information while displacing. Some users with rich shooting-game experience know the terrain of the outdoor virtual scene, and its high-value search areas in particular, like the backs of their hands, and often rush to the most favorable battle position at the start of a match or during a fight. Weakening the ability to acquire sound information during displacement raises such a user's exposure risk and forces the user to decide, when acting, between giving up rapid movement and stopping proactively to gather information, or striking proactively and giving up part of the information acquisition; in this way the game mechanism is balanced.
All the above optional technical solutions can be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
According to the method provided in this embodiment of the application, the first audio serving as the background sound effect is played in the outdoor virtual environment, and once the first virtual object starts a shooting operation, the volume of the first audio is adaptively reduced while the launch sound effect is played, so that the user can focus on listening to the second audio. This achieves a playing strategy that dynamically and intelligently adjusts the volume ratio of the first audio to the second audio as the behavior of the first virtual object changes, so that during real-time play, primary sound information is highlighted, secondary sound information is attenuated, and the audio playing effect is optimized.
Fig. 3 is a flowchart of an audio processing method provided in an embodiment of the present application. Referring to Fig. 3, this embodiment is applied to an electronic device and is described by taking the electronic device being a terminal as an example, where the terminal includes the first terminal or the second terminal in the above implementation environment. The embodiment includes the following contents:
301. The terminal, in response to the first virtual object being located in an outdoor virtual environment, plays a first audio, where the first audio is a background sound effect of the outdoor virtual environment.
Step 301 is similar to step 201 and will not be described herein.
302. The terminal, in response to the first virtual object launching a virtual prop in the outdoor virtual environment, plays a second audio, where the second audio is the launch sound effect of the virtual prop.
Step 302 is similar to step 202, and is not described herein.
303. The terminal, in response to detecting a play event of the second audio based on the auto-ducking function, decreases the volume of the first audio by a first decibel according to the S-shaped audio control curve within a first target duration after the play start time of the second audio.
Optionally, the first target duration is any value greater than or equal to 0 and less than or equal to the playing duration of the second audio, for example, the first target duration is 1 second.
In the above process, when the first virtual object is not engaged, the volume of the first audio (the ambient sound) is at its normal level; when the first virtual object starts to launch the virtual prop, the second audio (gunshots, explosions, and the like), which has a higher sound pressure level, is played at the same time, so the volume of the first audio is automatically reduced. In one example, taking the outdoor virtual environment as a forest, the first audio includes a Loop layer of ambient sound and the bird and cicada calls of point sound sources; when a gunshot or explosion occurs, the Loop layer of ambient sound can be reduced and the bird and cicada calls greatly reduced. Optionally, based on the following step 304, the ambient sound reappears when the gunshots or explosions gradually fade.
Fig. 4 is an interface schematic diagram of an outdoor virtual environment provided in an embodiment of the present application. Referring to Fig. 4, taking the outdoor virtual environment 400 as a forest as an example, while displaying the outdoor virtual environment 400, the terminal plays the background sound effect (that is, the first audio) of the outdoor virtual environment 400. In one example, the first audio includes a Loop layer and point sound sources, where the Loop layer is the ambient sound of the forest and the point sound sources include bird calls and cicada chirps; the Loop layer is played in a loop and the point sound sources are played periodically and intermittently, achieving the auditory effect of intermittent bird and cicada calls in the forest.
Optionally, the terminal provides the auto-ducking function based on the Wwise audio engine, so the setting interface of the Wwise audio engine can be invoked directly to set the first target duration and the first decibel; in addition, a volume control curve for the first audio can be set.
Fig. 5 is a schematic view of a setting interface of the auto-ducking function provided in an embodiment of the present application. Referring to Fig. 5, taking the Wwise audio engine as an example, a tab for the auto-ducking function 501 is provided in the setting interface 500. In the setting interface 500, a technician can set the values of the first target duration and the first decibel, and can also set the volume control curve of the first audio. The embodiment of the present application takes the first target duration being 1 second, the first decibel being 6 decibels, and the volume control curve being an S-Curve only as an example, without limitation thereto.
Fig. 6 is a schematic structural diagram of a first audio provided in an embodiment of the present application. Referring to Fig. 6, it can be seen in interface 600 that, taking the outdoor virtual environment as a forest as an example, after the first audio is layered, it is composed of two major parts: a random container of point sound source material and a blend container of Loop layer material.
Step 303 provides a process in which the terminal reduces the volume of the first audio based on the auto-ducking function; the auto-ducking approach has a simple operation flow and simple parameter settings, and can improve audio processing efficiency. In other words, in response to detecting the play event of the second audio based on the auto-ducking function, the volume of the first audio is reduced according to the S-Curve. Of course, besides the S-Curve, the terminal may also use a triangular volume control curve, a sinusoidal volume control curve, or an even smoother envelope curve set by a technician; the embodiment of the present application does not limit which volume control curve is used to reduce the volume of the first audio.
In some embodiments, the terminal may further reduce the volume of the first audio based on a side-chain effector, which can reduce the volume of the first audio in real time at the play start time of the second audio; that is, when the first target duration is equal to 0, the side-chain effector provides a more real-time volume modulation effect.
Optionally, in response to the side-chain effector detecting the level signal of the second audio, the terminal modulates the volume of the first audio based on the side-chain effector, where the volume of the first audio after modulation is smaller than its volume before modulation. Using the side-chain effector to reduce the volume realizes side-chain control of the volume, that is, a Real-Time Parameter Control (RTPC) is used to associate the output level with the volume.
Optionally, before modulating the volume, the terminal may make the following initialization settings: creating a control parameter (Game Parameter) associated with the output level value; mounting the side-chain effector on the second audio, that is, mounting the side-chain effector on the input object (the second audio) of the side-chain signal; and associating the control parameter with the volume of the modulated object of the side-chain effector, that is, associating the control parameter with the modulated object (the first audio) at the output end of the side chain.
The difference between the RTPC method and the auto-ducking method is as follows. The auto-ducking method monitors the play event of the second audio to decide whether to compress the volume of the first audio; however, if there is a blank sound segment (commonly called a mute period) at the beginning or end of the second audio, the volume of the first audio is still reduced while the second audio plays the blank segment, which presents the user with a background sound effect that weakens for no audible reason. The RTPC method instead decides whether to start volume compression by detecting whether a level signal of the second audio is present, so the volume compression of the first audio is completed synchronously at the moment the second audio actually produces a level signal. Although the operation is slightly more complicated than the auto-ducking method, its real-time performance is better.
Step 303 above shows one possible implementation in which the terminal decreases the volume of the first audio by the first decibel within the first target duration after the play start time of the second audio. In some embodiments, instead of decreasing the volume of the first audio by a fixed difference (the first decibel), the terminal may decrease it at a specified change rate; the embodiment of the present application does not specifically limit the manner of decreasing the volume of the first audio.
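The alternative of reducing the volume at a specified change rate rather than by a fixed difference could look like the following sketch; the rate and floor values are invented for illustration.

```python
def ramp_down_volume_db(start_db: float, rate_db_per_s: float,
                        elapsed_s: float, floor_db: float) -> float:
    """Decrease the volume linearly at rate_db_per_s from start_db,
    never dropping below floor_db. Unlike the fixed-difference approach,
    the total reduction here depends on how long the ramp runs."""
    return max(start_db - rate_db_per_s * elapsed_s, floor_db)
```

For example, at 6 dB per second the volume reaches -3 dB after half a second and is clamped at the floor once the ramp has run long enough.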
In this process, the terminal decreases the volume of the first audio while playing the second audio, so that when the first audio and the second audio are played simultaneously, the user can concentrate on listening to the second audio (that is, the launch sound effect of the virtual prop), and the volume ratio of the first audio to the second audio is adaptively and dynamically adjusted as the behavior of the first virtual object changes. Thus, during real-time play, primary sound information is highlighted and secondary sound information is attenuated, which greatly improves the listening experience of the audio mix and optimizes the audio playing effect.
304. In response to detecting, based on the auto-ducking function, that the second audio has finished playing, the terminal increases the volume of the first audio by the first decibel according to the S-shaped audio control curve within a second target duration after the play end time of the second audio.
Optionally, the second target time period is any value greater than or equal to 0, for example, the second target time period is 0.5 seconds.
The manner of increasing the volume of the first audio in step 304 is similar to that in step 303 and may be implemented based on the auto-ducking function or based on the side-chain effector, which is not described herein again. The embodiment of the present application does not specifically limit the manner of increasing the volume of the first audio.
Fig. 7 is a schematic flowchart of an audio processing method according to an embodiment of the present application. Referring to Fig. 7, as shown at 700: in step one, the terminal enters a game at the start of a match, and the ambient sound (the first audio) starts to play; in step two, when the player fires, the terminal checks whether the player is outdoors and whether the outdoor ambient sound is playing, and if the outdoor ambient sound is playing, step three is executed, otherwise the process exits; in step three, the terminal reduces the ambient sound by 6 decibels within 1 second after the shot is triggered, using the S-Curve as the volume control curve, that is, the terminal reduces the volume of the first audio by 6 decibels within 1 second after the second audio starts playing; in step four, the terminal restores the ambient sound to its normal volume within 0.5 seconds after the gunshot ends, again using the S-Curve, that is, the terminal restores the volume of the first audio to its normal value within 0.5 seconds after the second audio finishes playing.
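The four-step flow above can be combined into a single envelope: duck the ambient sound by 6 dB over 1 second from the start of the gunshot, hold while it plays, then restore over 0.5 seconds after it ends. The sketch below is illustrative, using a smoothstep as a stand-in for the S-Curve.

```python
def smoothstep(p: float) -> float:
    """Smooth 0-to-1 ramp, used here as an S-Curve stand-in."""
    p = min(max(p, 0.0), 1.0)
    return p * p * (3.0 - 2.0 * p)

def ambient_volume_db(t_s: float, shot_start_s: float, shot_end_s: float,
                      duck_db: float = 6.0,
                      duck_time_s: float = 1.0,
                      restore_time_s: float = 0.5) -> float:
    """Ambient (first audio) volume in dB relative to normal at time t_s:
    0 dB before the shot, fading to -duck_db over duck_time_s while the
    shot plays, then fading back to 0 dB over restore_time_s after it ends."""
    if t_s < shot_start_s:
        return 0.0
    if t_s < shot_end_s:
        # ducking phase: smooth fade down after the shot is triggered
        return -duck_db * smoothstep((t_s - shot_start_s) / duck_time_s)
    # restore phase: smooth fade back up after the gunshot ends
    return -duck_db * (1.0 - smoothstep((t_s - shot_end_s) / restore_time_s))
```

With a shot lasting from 1 s to 3 s, the ambient sound sits at 0 dB beforehand, reaches -6 dB once the 1-second duck completes, and is back at 0 dB shortly after the shot ends.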
All the above optional technical solutions can be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
According to the method provided in this embodiment of the application, the first audio serving as the background sound effect is played in the outdoor virtual environment, and once the first virtual object starts a shooting operation, the volume of the first audio is adaptively reduced while the launch sound effect is played, so that the user can focus on listening to the second audio. This achieves a playing strategy that dynamically and intelligently adjusts the volume ratio of the first audio to the second audio as the behavior of the first virtual object changes, so that during real-time play, primary sound information is highlighted, secondary sound information is attenuated, and the audio playing effect is optimized.
The above embodiment shows that, in an outdoor virtual environment, if the first virtual object is in a battle state, the volume of the first audio is dynamically reduced so that the volume of the second audio is more prominent, bringing the user an immersive auditory experience.
Fig. 8 is a flowchart of an audio processing method provided in an embodiment of the present application. Referring to Fig. 8, this embodiment is applied to an electronic device and is described by taking the electronic device being a terminal as an example, where the terminal includes the first terminal or the second terminal in the above implementation environment. The embodiment includes the following contents:
801. The terminal, in response to the virtual prop launched by the first virtual object hitting a second virtual object, plays a third audio, where the third audio is the hit prompt sound effect of the virtual prop.
It should be noted that step 801 is executed after step 302; that is, after the virtual prop is launched by the first virtual object, if the launched virtual prop hits a second virtual object, the third audio is played.
In some embodiments, if the virtual prop is located within the collision detection range of any second virtual object, it is determined that the virtual prop has hit that second virtual object; the prop ID of the virtual prop is then used as an index to query the audio library for the hit prompt sound effect stored in correspondence with the prop ID, the queried hit prompt sound effect is acquired as the third audio, and the playing control is invoked to play the third audio.
It should be noted that the playing mode of the third audio is similar to the playing mode of the first audio in step 201, and is not described herein again.
Fig. 9 is an interface schematic diagram of an outdoor virtual environment according to an embodiment of the present application. Referring to fig. 9, taking an outdoor virtual environment 900 that is a forest as an example, the terminal controls the first virtual object to launch a virtual prop 901 at a second virtual object; if the virtual prop 901 hits any second virtual object, a visual crosshair-expansion effect is presented in the outdoor virtual environment 900 and the third audio is triggered to play.
802. In response to detecting a playing event of the third audio based on an automatic ducking function, the terminal reduces the action volume of the first virtual object and the volume of the second audio according to an S-shaped audio control curve within a third target duration after the playing start moment of the third audio.
Optionally, the third target duration is any value greater than or equal to 0 and less than or equal to the playing duration of the third audio, for example, the third target duration is 0.5 seconds.
Step 802 provides a possible implementation for decreasing the action volume of the first virtual object and the volume of the second audio while the terminal plays the third audio, namely decreasing the volume based on the auto-ducking function, which is similar to step 303 above and is not described herein again. Of course, the terminal may also reduce the volume based on a side-chain effector, similar to the alternative manner in step 303, which is likewise not described herein again.
Fig. 10 is a schematic view of a setting interface of the auto-ducking function provided in an embodiment of the present application. Referring to fig. 10, taking the Wwise audio engine as an example, a tab of an auto-ducking function 1001 is provided in the setting interface 1000. A technician can set the third target duration, the decrease amounts of the action volume and the volume of the second audio, and the volume control curve used for both. In this embodiment of the present application, the third target duration of 0.5 seconds, a decrease of 3 dB for the action volume, a decrease of 3 dB for the volume of the second audio, and the S-Curve as the volume control curve are merely examples, and no limitation is intended thereto.
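The S-shaped volume ramp configured above can be approximated as follows. This is a minimal sketch under stated assumptions: the smoothstep polynomial is assumed as the S-curve formula, since the embodiment names the S-Curve but does not give its equation; the 0.5-second duration and 3 dB depth follow the example values above.

```python
def s_curve(t: float) -> float:
    """Smoothstep polynomial: an S-shaped interpolation on [0, 1]
    (assumed form; the embodiment does not specify the exact curve)."""
    return t * t * (3.0 - 2.0 * t)

def ducked_gain_db(elapsed: float, duration: float = 0.5, depth_db: float = 3.0) -> float:
    """Gain offset (dB) applied to the ducked audio `elapsed` seconds after
    the trigger audio (here, the third audio) starts playing; the full
    depth_db reduction is reached once `duration` has elapsed."""
    t = min(max(elapsed / duration, 0.0), 1.0)
    return -depth_db * s_curve(t)

# No attenuation at the trigger moment, the full 3 dB cut after 0.5 s,
# and a smooth transition (zero slope at both ends) in between.
assert ducked_gain_db(0.0) == 0.0
assert ducked_gain_db(0.25) == -1.5
assert ducked_gain_db(0.5) == -3.0
```

The restore ramp of step 803 can reuse the same curve with the sign inverted, raising the volume back over the fourth target duration.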
803. In response to detecting, based on the auto-ducking function, that the third audio has finished playing, the terminal raises the action volume and the volume of the second audio according to the S-shaped audio control curve within a fourth target duration after the playing end moment of the third audio.
Optionally, the fourth target time period is any value greater than or equal to 0, for example, the fourth target time period is 0.5 seconds.
Step 803 provides a possible implementation for increasing the action volume and the volume of the second audio after the third audio has been played, namely increasing the volume based on the auto-ducking function. Of course, the terminal may also increase the volume based on the side-chain effector, similar to step 304, which is not described herein again. The embodiment of the present application does not specifically limit the manner of increasing the action volume and the volume of the second audio.
Fig. 11 is a schematic flowchart of an audio processing method according to an embodiment of the present application. Referring to fig. 11, as shown at 1100: in step one, when the terminal controls the first virtual object to fire, it detects whether an enemy (i.e., a second virtual object) is hit; in step two, if an enemy is hit, the terminal reduces the action volume of the first virtual object by 3 dB and the gunshot volume of the first virtual object by 3 dB within 0.5 seconds after the hit feedback sound appears, using the S-Curve as the volume control curve; that is, the terminal reduces the action volume and the gunshot volume by 3 dB within 0.5 seconds after the third audio starts playing; in step three, the terminal restores the action volume and the gunshot volume to their normal values within 0.5 seconds after the hit feedback sound finishes playing, again using the S-Curve; that is, the terminal restores the action volume and the gunshot volume to normal values within 0.5 seconds after the third audio finishes playing.
All the above optional technical solutions can be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
According to the method provided by this embodiment of the present application, because the hit prompt sound effect (the third audio) usually has a low-to-mid-frequency timbre and a volume smaller than that of the second audio, it is easily masked by the higher-frequency second audio during intense combat. Therefore, the volume of the second audio is reduced while the third audio is played and restored after the third audio finishes playing. This prevents the user from missing the hit prompt sound effect, better highlights the sound fed back by the system when the second virtual object is hit, and lets the user know in time that the virtual prop has hit the second virtual object.
In the above embodiment, it is shown that when the virtual prop launched by the terminal-controlled first virtual object hits the second virtual object, the action volume of the first virtual object and the volume of the second audio (such as a gunshot) are dynamically reduced, so that the volume of the third audio (i.e., the hit prompt sound) is more prominent and the user does not miss the hit prompt sound.
Fig. 12 is a flowchart of an audio processing method provided in an embodiment of the present application. Referring to fig. 12, this embodiment is applied to an electronic device and is described by taking the electronic device as a terminal for example, where the terminal is the first terminal or the second terminal in the above implementation environment. The embodiment includes the following contents:
1201. In response to a second virtual object in the outdoor virtual environment performing an action, the terminal plays a fourth audio, where the fourth audio includes at least one of an action sound effect or an environment interaction sound effect of the second virtual object.
In some embodiments, when the terminal detects an action event of the second virtual object, it determines that the second virtual object has performed an action, generates a fourth audio according to the action information of the second virtual object, where the fourth audio is used to represent at least one of an action sound effect or an environment interaction sound effect of the second virtual object, and invokes the play control to play the fourth audio.
It should be noted that the playing mode of the fourth audio is similar to the playing mode of the third audio in step 801, and is not described herein again.
1202. In response to the first virtual object undergoing displacement in the outdoor virtual environment, the terminal reduces the volume of the fourth audio.
In one embodiment, in response to the first virtual object undergoing displacement, the terminal plays a displacement sound effect, and if a playing event of the displacement sound effect is detected based on the auto-ducking function, reduces the volume of the fourth audio according to the S-Curve within a fifth target duration after the playing start moment of the displacement sound effect. Optionally, the fifth target duration is any value greater than or equal to 0 and less than or equal to the playing duration of the displacement sound effect, for example, 0.5 seconds. Optionally, the types of displacement include running, jumping, diving, and the like. In one example, only displacement of a target type triggers playing of the displacement sound effect, for example, the target type includes at least one of running or jumping; the displacement type is not specifically limited in this embodiment of the present application.
Fig. 13 is a schematic view of a setting interface of the auto-ducking function provided in an embodiment of the present application. Referring to fig. 13, taking the Wwise audio engine as an example, a tab of an auto-ducking function 1301 is provided in the setting interface 1300. A technician can set the fifth target duration, the volume decrease amount of the fourth audio, and the volume control curve of the fourth audio. In this embodiment of the present application, the fifth target duration of 0.5 seconds, a volume decrease of 6 dB for the fourth audio, and the S-Curve as the volume control curve are merely examples, and no limitation is intended thereto.
Step 1202 is similar to step 802, and is not described herein.
1203. In response to the first virtual object completing the displacement, the terminal increases the volume of the fourth audio.
In some embodiments, the terminal stops playing the displacement sound effect in response to the first virtual object completing the displacement, and if the completion of playing of the displacement sound effect is detected based on the auto-ducking function, increases the volume of the fourth audio according to the S-Curve within a sixth target duration after the playing completion moment of the displacement sound effect. Optionally, the sixth target duration is any value greater than or equal to 0, for example, 0.5 seconds.
Step 1203 is similar to step 803, and is not described in detail here.
Fig. 14 is a schematic flowchart of an audio processing method according to an embodiment of the present application. Referring to fig. 14, as shown at 1400: in step one, when the terminal controls the first virtual object to start running rapidly, it detects whether there is an action sound emitted by a second virtual object around the first virtual object; in step two, if there is an action sound emitted by the second virtual object, the action volume of the second virtual object is reduced by 6 dB and the environment interaction volume is reduced by 6 dB within 0.5 seconds after the first virtual object starts running rapidly, using the S-Curve as the volume control curve; that is, the terminal reduces the volume of the fourth audio by 6 dB within 0.5 seconds after the first virtual object begins its displacement; in step three, the action volume and the environment interaction volume of the second virtual object are restored to normal within 0.5 seconds after the first virtual object stops running rapidly, again using the S-Curve; that is, the terminal restores the volume of the fourth audio to its normal value within 0.5 seconds after the first virtual object completes the displacement.
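The three steps of Fig. 14 can be sketched as a small state machine. The class and method names below are hypothetical; the 6 dB duck depth follows the example values above, and the 0.5-second S-Curve fade is omitted for brevity.

```python
DUCK_DB = 6.0  # example reduction applied to the fourth audio

class FourthAudioDucker:
    """Tracks the gain offset applied to nearby enemies' action and
    environment-interaction sounds while the first virtual object moves."""
    def __init__(self) -> None:
        self.gain_db = 0.0
    def on_displacement_start(self) -> None:
        # Step two: duck the fourth audio by 6 dB when rapid running begins.
        self.gain_db = -DUCK_DB
    def on_displacement_end(self) -> None:
        # Step three: restore normal volume once the displacement completes.
        self.gain_db = 0.0

ducker = FourthAudioDucker()
ducker.on_displacement_start()
assert ducker.gain_db == -6.0   # enemy sounds attenuated while running
ducker.on_displacement_end()
assert ducker.gain_db == 0.0    # back to normal after the run ends
```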
All the above optional technical solutions can be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
According to the method provided by this embodiment of the present application, when the first virtual object is controlled to undergo displacement (such as rapid running or jumping), the action sound effect and environment interaction sound effect of the first virtual object are played while the action sound effects and environment interaction sound effects emitted by second virtual objects in the same outdoor virtual environment (the fourth audio) are reduced, thereby weakening the first virtual object's ability to acquire sound information during displacement. For users with rich shooting-game experience, who are very familiar with the terrain of the outdoor virtual scene and know its high-value search areas well, this weakening matters: such users can quickly find the most favorable battle locations at the start of a match or a fight, and weakening their ability to acquire sound information during displacement increases their exposure risk and forces them to make a decision when acting, namely either give up rapid movement and actively stop to gather information, or move actively and give up part of the information gathering, thereby balancing the game mechanism.
Fig. 15 is a schematic structural diagram of an audio processing apparatus according to an embodiment of the present application, please refer to fig. 15, where the apparatus includes:
the playing module 1501 is configured to play a first audio in response to that the first virtual object is located in the outdoor virtual environment, where the first audio is a background sound effect of the outdoor virtual environment;
the playing module 1501 is further configured to, in response to the first virtual object launching a virtual prop in the outdoor virtual environment, play a second audio, where the second audio is an emission sound effect of the virtual prop;
the decreasing module 1502 is configured to decrease the volume of the first audio when the second audio is played.
The apparatus provided by this embodiment of the present application plays the first audio, serving as the background sound effect, in the outdoor virtual environment. Once the first virtual object initiates a shooting operation, the volume of the first audio is adaptively reduced while the emission sound effect is played, so that the user focuses attention on listening to the second audio. This achieves a playing strategy that dynamically and intelligently adjusts the volume ratio of the first audio to the second audio as the behavior of the first virtual object changes, so that primary sound information is highlighted and secondary sound information is attenuated during real-time gameplay, optimizing the audio playing effect.
In a possible implementation, based on the apparatus components of fig. 15, the reduction module 1502 includes:
a first reducing unit, configured to reduce the volume of the first audio based on the automatic ducking function; or
a second reducing unit for reducing the volume of the first audio based on the side chain effector.
In a possible embodiment, the second reduction unit is configured to:
and in response to the side-chain effector detecting a level signal of the second audio, modulating the volume of the first audio based on the side-chain effector, wherein the volume of the first audio after modulation is less than its volume before modulation.
In a possible embodiment, based on the apparatus composition of fig. 15, the apparatus further comprises:
a creation module for creating a control parameter associated with the output level value;
the mounting module is used for mounting the side-chain effector on the second audio;
and the correlation module is used for correlating the control parameter with the volume of the modulated object of the side chain effector.
In a possible embodiment, the first reduction unit is configured to:
and in response to detecting a playing event of the second audio based on the automatic ducking function, reducing the volume of the first audio according to the S-shaped audio control curve.
In one possible implementation, the reduction module 1502 is configured to:
and within a first target duration after the playing start moment of the second audio, reducing the volume of the first audio by a first decibel amount, wherein the first target duration is less than or equal to the playing duration of the second audio.
In a possible embodiment, based on the apparatus composition of fig. 15, the apparatus further comprises:
and the lifting module is used for raising the volume of the first audio by the first decibel amount within a second target duration after the playing end moment of the second audio.
In one possible implementation, the playing module 1501 is further configured to: in response to the virtual prop hitting a second virtual object, play a third audio, where the third audio is a hit prompt sound effect of the virtual prop;
the reduction module 1502 is further configured to: when the third audio is played, the action volume of the first virtual object and the volume of the second audio are reduced;
the lifting module is further configured to raise the action volume and the volume of the second audio after the third audio is played.
In one possible implementation, the playing module 1501 is further configured to: in response to a second virtual object in the outdoor virtual environment performing an action, play a fourth audio, where the fourth audio includes at least one of an action sound effect or an environment interaction sound effect of the second virtual object;
the reduction module 1502 is further configured to: in response to the first virtual object displacing in the outdoor virtual environment, decreasing the volume of the fourth audio;
the lifting module is further configured to: and in response to the completion of the displacement of the first virtual object, increasing the volume of the fourth audio.
All the above optional technical solutions can be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
It should be noted that: in the audio processing apparatus provided in the above embodiment, when processing audio, only the division of the above functional modules is taken as an example, and in practical applications, the above function distribution can be completed by different functional modules according to needs, that is, the internal structure of the electronic device is divided into different functional modules to complete all or part of the above described functions. In addition, the audio processing apparatus and the audio processing method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are described in detail in the audio processing method embodiments and are not described herein again.
Fig. 16 is a schematic structural diagram of an electronic device according to an embodiment of the present application. Referring to fig. 16, the electronic device is illustrated as a terminal 1600. Optionally, the device type of the terminal 1600 includes: a smartphone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer, or a desktop computer. The terminal 1600 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
Generally, terminal 1600 includes: a processor 1601, and a memory 1602.
Optionally, processor 1601 includes one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. Alternatively, the processor 1601 is implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). In some embodiments, the processor 1601 includes a main processor and a coprocessor, the main processor is a processor for Processing data in an awake state, also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1601 is integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 1601 further includes an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
In some embodiments, memory 1602 includes one or more computer-readable storage media, which are optionally non-transitory. Optionally, memory 1602 also includes high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 1602 is used to store at least one computer program for execution by the processor 1601 to implement the audio processing methods provided by the various embodiments herein.
In some embodiments, the terminal 1600 may also optionally include: peripheral interface 1603 and at least one peripheral. Processor 1601, memory 1602 and peripheral interface 1603 can be connected via a bus or signal line. Various peripheral devices may be connected to peripheral interface 1603 via buses, signal lines, or circuit boards. Specifically, the peripheral device includes: at least one of a radio frequency circuit 1604, a display 1605, a camera assembly 1606, an audio circuit 1607, and a power supply 1609.
Peripheral interface 1603 can be used to connect at least one I/O (Input/Output) related peripheral to processor 1601 and memory 1602. In some embodiments, processor 1601, memory 1602, and peripheral interface 1603 are integrated on the same chip or circuit board; in some other embodiments, any one or both of processor 1601, memory 1602 and peripheral interface 1603 are implemented on a separate chip or circuit board, which is not limited by this embodiment.
The Radio Frequency circuit 1604 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 1604 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 1604 converts the electrical signal into an electromagnetic signal to be transmitted, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1604 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. Optionally, the radio frequency circuit 1604 communicates with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 1604 further includes NFC (Near Field Communication) related circuits, which are not limited in this application.
The display 1605 is used to display a UI (User Interface). Optionally, the UI includes graphics, text, icons, video, and any combination thereof. When the display screen 1605 is a touch display screen, the display screen 1605 also has the ability to capture touch signals on or over the surface of the display screen 1605. The touch signal can be input to the processor 1601 as a control signal for processing. Optionally, the display 1605 is also used to provide virtual buttons and/or a virtual keyboard, also known as soft buttons and/or a soft keyboard. In some embodiments, the display 1605 is one, providing the front panel of the terminal 1600; in other embodiments, there are at least two display screens 1605, which are respectively disposed on different surfaces of the terminal 1600 or are in a foldable design; in still other embodiments, display 1605 is a flexible display disposed on a curved surface or folded surface of terminal 1600. Even more optionally, the display 1605 is arranged in a non-rectangular irregular pattern, i.e. a shaped screen. Optionally, the Display 1605 is made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or the like.
The camera assembly 1606 is used to capture images or video. Optionally, camera assembly 1606 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1606 further includes a flash. Optionally, the flash is a monochrome temperature flash, or a bi-color temperature flash. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp and is used for light compensation under different color temperatures.
In some embodiments, the audio circuitry 1607 includes a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 1601 for processing or inputting the electric signals to the radio frequency circuit 1604 to achieve voice communication. For the purpose of stereo sound collection or noise reduction, a plurality of microphones are respectively arranged at different positions of the terminal 1600. Optionally, the microphone is an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 1601 or the radio frequency circuit 1604 into sound waves. Alternatively, the speaker is a conventional membrane speaker, or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, not only the electric signal can be converted into a sound wave audible to human, but also the electric signal can be converted into a sound wave inaudible to human for use in distance measurement or the like. In some embodiments, the audio circuit 1607 also includes a headphone jack.
Power supply 1609 is used to provide power to the various components of terminal 1600. Optionally, power supply 1609 is alternating current, direct current, a disposable battery, or a rechargeable battery. When power supply 1609 comprises a rechargeable battery, the rechargeable battery supports wired or wireless charging. The rechargeable battery is also used to support fast charge technology.
In some embodiments, terminal 1600 also includes one or more sensors 1610. The one or more sensors 1610 include, but are not limited to: acceleration sensor 1611, gyro sensor 1612, pressure sensor 1613, optical sensor 1615, and proximity sensor 1616.
In some embodiments, acceleration sensor 1611 detects acceleration in three coordinate axes of a coordinate system established with terminal 1600. For example, the acceleration sensor 1611 is used to detect components of the gravitational acceleration in three coordinate axes. Alternatively, the processor 1601 controls the display screen 1605 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1611. The acceleration sensor 1611 is also used for acquisition of motion data of a game or a user.
In some embodiments, the gyroscope sensor 1612 detects the body direction and the rotation angle of the terminal 1600, and the gyroscope sensor 1612 and the acceleration sensor 1611 cooperate to acquire the 3D motion of the user on the terminal 1600. The processor 1601 is configured to perform the following functions according to the data collected by the gyro sensor 1612: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Optionally, pressure sensors 1613 are disposed on the side bezel of terminal 1600 and/or underlying display 1605. When the pressure sensor 1613 is disposed on the side frame of the terminal 1600, the holding signal of the user to the terminal 1600 can be detected, and the processor 1601 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 1613. When the pressure sensor 1613 is disposed at the lower layer of the display 1605, the processor 1601 controls the operability control on the UI interface according to the pressure operation of the user on the display 1605. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The optical sensor 1615 is used to collect ambient light intensity. In one embodiment, the processor 1601 controls the display brightness of the display screen 1605 based on the ambient light intensity collected by the optical sensor 1615. Specifically, when the ambient light intensity is high, the display luminance of the display screen 1605 is increased; when the ambient light intensity is low, the display brightness of the display screen 1605 is adjusted down. In another embodiment, the processor 1601 is further configured to dynamically adjust the shooting parameters of the camera assembly 1606 based on the ambient light intensity collected by the optical sensor 1615.
A proximity sensor 1616, also referred to as a distance sensor, is typically disposed on the front panel of the terminal 1600. The proximity sensor 1616 is used to collect the distance between the user and the front surface of the terminal 1600. In one embodiment, when the proximity sensor 1616 detects that the distance between the user and the front surface of the terminal 1600 gradually decreases, the processor 1601 controls the display 1605 to switch from the bright-screen state to the off-screen state; when the proximity sensor 1616 detects that the distance between the user and the front surface of the terminal 1600 gradually increases, the processor 1601 controls the display 1605 to switch from the off-screen state to the bright-screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 16 is not intended to be limiting of terminal 1600, and can include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be employed.
In an exemplary embodiment, a computer-readable storage medium, such as a memory including at least one computer program, which is executable by a processor in a terminal to perform the audio processing method in the above-described embodiments, is also provided. For example, the computer-readable storage medium includes a ROM (Read-Only Memory), a RAM (Random-Access Memory), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product or computer program is also provided, comprising one or more program codes, the one or more program codes being stored in a computer readable storage medium. The one or more program codes can be read from the computer-readable storage medium by one or more processors of the electronic device, and the one or more processors execute the one or more program codes, so that the electronic device can execute to complete the audio processing method in the above-described embodiments.
Those skilled in the art will appreciate that all or part of the steps for implementing the above embodiments can be implemented by hardware, or can be implemented by a program instructing relevant hardware, and optionally, the program is stored in a computer readable storage medium, and optionally, the above mentioned storage medium is a read-only memory, a magnetic disk or an optical disk, etc.
The above description covers merely exemplary embodiments of the present application and is not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present application shall fall within its protection scope.

Claims (15)

1. A method of audio processing, the method comprising:
displaying, in a virtual environment of a game match, a first virtual object and a shooting control of a virtual prop;
in a case where an environment type of the virtual environment is an outdoor virtual environment, querying, from an audio library using an environment identifier of the outdoor virtual environment as an index, a first audio stored in correspondence with the environment identifier, the first audio being a background sound effect of the outdoor virtual environment;
calling an audio playing control to play the first audio;
in response to a triggering operation of the first virtual object on the shooting control, controlling the first virtual object to launch the virtual prop in the outdoor virtual environment;
querying, from the audio library using a prop identifier of the virtual prop as an index, a second audio stored in correspondence with the prop identifier, the second audio being a launch sound effect of the virtual prop;
calling the audio playing control to play the second audio;
in response to detecting a playing event of the second audio based on an automatic ducking function, reducing the volume of the first audio by a first decibel amount according to an S-shaped audio control curve within a first target duration after a playing start moment of the second audio, the first target duration being less than or equal to a playing duration of the second audio;
and in response to detecting, based on the automatic ducking function, that the second audio has finished playing, increasing the volume of the first audio by the first decibel amount according to the S-shaped audio control curve within a second target duration after a playing end moment of the second audio.
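The S-shaped ramp in claim 1 can be pictured as a sigmoid gain envelope: the background volume eases down by the first decibel amount while the launch sound starts, then eases back up after it ends. Below is a minimal illustrative sketch (not taken from the patent; the function name, parameters, and the steepness constant are assumptions):

```python
import math

def s_curve_gain_db(t, duration, depth_db, restore=False):
    """Gain offset (dB) at time t along an S-shaped (sigmoid) ramp.

    t:        seconds elapsed since the ramp started
    duration: target duration of the ramp (seconds)
    depth_db: the "first decibel" amount to duck by, e.g. 6.0
    restore:  False = ducking ramp (0 dB -> -depth), True = recovery ramp
    """
    x = max(0.0, min(1.0, t / duration))           # normalised progress 0..1
    s = 1.0 / (1.0 + math.exp(-10.0 * (x - 0.5)))  # sigmoid: ~0 at x=0, ~1 at x=1
    return (s - 1.0) * depth_db if restore else -s * depth_db

# Ducking the background audio while a 0.5 s launch sound plays:
# gains decrease smoothly from ~0 dB toward -6 dB.
duck = [round(s_curve_gain_db(t / 10.0, 0.5, 6.0), 2) for t in range(6)]
```

Because the curve is steepest in the middle and flat at both ends, the transition avoids the audible "jump" that a linear fade of the same duration can produce.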
2. The method of claim 1, further comprising:
reducing the volume of the first audio based on a side-chain effector.
3. The method of claim 2, wherein reducing the volume of the first audio based on the side-chain effector comprises:
in response to the side-chain effector detecting a level signal of the second audio, modulating the volume of the first audio based on the side-chain effector, the volume of the first audio after modulation being less than the volume of the first audio before modulation.
4. The method of claim 2 or 3, wherein before reducing the volume of the first audio based on the side-chain effector, the method further comprises:
creating a control parameter associated with an output level value;
mounting the side-chain effector on the second audio;
and associating the control parameter with a volume of a modulated object of the side-chain effector.
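The side-chain arrangement of claims 2 to 4 can be sketched as a small model: a control parameter is created, the effector is mounted on the source (second) audio, and the parameter is associated with the volume of a modulated object. The class and method names below are illustrative assumptions, not an actual audio-middleware API:

```python
class SideChainEffector:
    """Toy model of the side-chain setup in claims 2-4. The effector is
    mounted on a source audio; its detected output level drives a control
    parameter, which in turn attenuates the associated modulated object."""

    def __init__(self, depth_db=6.0, threshold=0.01):
        self.depth_db = depth_db       # how far to duck the modulated object
        self.threshold = threshold     # level above which ducking engages
        self.control_param = 0.0       # created control parameter (claim 4)
        self.get_source_level = None
        self.modulated = None

    def mount(self, get_source_level):
        # Mount the effector on the source (second) audio.
        self.get_source_level = get_source_level

    def associate(self, modulated_track):
        # Associate the control parameter with the modulated object's volume.
        self.modulated = modulated_track

    def tick(self):
        # Per audio frame: detect the source level signal (claim 3) and
        # modulate the associated track's volume accordingly.
        level = self.get_source_level()
        self.control_param = 1.0 if level > self.threshold else 0.0
        self.modulated["gain_db"] = -self.depth_db * self.control_param

background = {"gain_db": 0.0}
fx = SideChainEffector(depth_db=6.0)
fx.mount(lambda: 0.8)   # launch sound currently audible at level 0.8
fx.associate(background)
fx.tick()               # background is ducked while the source is audible
```

In a production engine the threshold comparison would typically be replaced by a smoothed envelope follower with attack/release times, but the mount/associate/modulate data flow is the same.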
5. The method of claim 1, further comprising:
in response to the virtual prop hitting a second virtual object, playing a third audio, the third audio being a hit prompt sound effect of the virtual prop;
reducing an action volume of the first virtual object and the volume of the second audio while the third audio is playing;
and increasing the action volume and the volume of the second audio after the third audio has finished playing.
6. The method of claim 1, further comprising:
in response to a second virtual object performing an action in the outdoor virtual environment, playing a fourth audio, the fourth audio comprising at least one of an action sound effect or an environment interaction sound effect of the second virtual object;
reducing the volume of the fourth audio in response to the first virtual object being displaced in the outdoor virtual environment;
and increasing the volume of the fourth audio in response to the displacement of the first virtual object being completed.
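Claims 5 and 6 describe paired duck/restore rules keyed to game events (hit prompt starts/ends, local displacement starts/ends). A hedged sketch of such an event-driven mixer follows; the event names, track names, and the 6 dB amount are illustrative assumptions:

```python
DUCK_DB = 6.0  # assumed ducking depth; the claims leave the amount open

def apply_event(volumes, event):
    """Mutate a dict of track volumes (dB) according to a game event."""
    if event == "hit_prompt_start":      # claim 5: third audio starts
        volumes["action"] -= DUCK_DB
        volumes["launch"] -= DUCK_DB
    elif event == "hit_prompt_end":      # claim 5: third audio finished
        volumes["action"] += DUCK_DB
        volumes["launch"] += DUCK_DB
    elif event == "local_move_start":    # claim 6: first object displaced
        volumes["other_players"] -= DUCK_DB
    elif event == "local_move_end":      # claim 6: displacement complete
        volumes["other_players"] += DUCK_DB
    return volumes

mix = {"action": 0.0, "launch": 0.0, "other_players": 0.0}
apply_event(mix, "hit_prompt_start")   # action and launch ducked
apply_event(mix, "local_move_start")   # other players' sounds ducked
```

Each rule is symmetric, so every "start" event has a matching "end" event that restores the volume it ducked.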
7. An audio processing apparatus, characterized in that the apparatus comprises:
a playing module, configured to display, in a virtual environment of a game match, a first virtual object and a shooting control of a virtual prop; in a case where an environment type of the virtual environment is an outdoor virtual environment, query, from an audio library using an environment identifier of the outdoor virtual environment as an index, a first audio stored in correspondence with the environment identifier, the first audio being a background sound effect of the outdoor virtual environment; and call an audio playing control to play the first audio;
the playing module being further configured to, in response to a triggering operation of the first virtual object on the shooting control, control the first virtual object to launch the virtual prop in the outdoor virtual environment; query, from the audio library using a prop identifier of the virtual prop as an index, a second audio stored in correspondence with the prop identifier, the second audio being a launch sound effect of the virtual prop; and call the audio playing control to play the second audio;
a reducing module comprising a first reducing unit and a second reducing unit, the first reducing unit being configured to, in response to detecting a playing event of the second audio based on an automatic ducking function, reduce the volume of the first audio by a first decibel amount according to an S-shaped audio control curve within a first target duration after a playing start moment of the second audio, the first target duration being less than or equal to a playing duration of the second audio;
and a raising module, configured to, in response to detecting, based on the automatic ducking function, that the second audio has finished playing, raise the volume of the first audio by the first decibel amount according to the S-shaped audio control curve within a second target duration after a playing end moment of the second audio.
8. The apparatus of claim 7, wherein the reducing module further comprises:
a second reducing unit, configured to reduce the volume of the first audio based on a side-chain effector.
9. The apparatus of claim 8, wherein the second reducing unit is configured to:
in response to the side-chain effector detecting a level signal of the second audio, modulate the volume of the first audio based on the side-chain effector, the volume of the first audio after modulation being less than the volume of the first audio before modulation.
10. The apparatus of claim 8 or 9, further comprising:
a creating module, configured to create a control parameter associated with an output level value;
a mounting module, configured to mount the side-chain effector on the second audio;
and an associating module, configured to associate the control parameter with a volume of a modulated object of the side-chain effector.
11. The apparatus of claim 7, wherein the playing module is further configured to: in response to the virtual prop hitting a second virtual object, play a third audio, the third audio being a hit prompt sound effect of the virtual prop;
the reducing module is further configured to: reduce an action volume of the first virtual object and the volume of the second audio while the third audio is playing;
and the apparatus further comprises a raising module, configured to raise the action volume and the volume of the second audio after the third audio has finished playing.
12. The apparatus of claim 7, wherein the playing module is further configured to: in response to a second virtual object performing an action in the outdoor virtual environment, play a fourth audio, the fourth audio comprising at least one of an action sound effect or an environment interaction sound effect of the second virtual object;
the reducing module is further configured to: reduce the volume of the fourth audio in response to the first virtual object being displaced in the outdoor virtual environment;
and the apparatus further comprises a raising module, configured to raise the volume of the fourth audio in response to the displacement of the first virtual object being completed.
13. An electronic device, comprising one or more processors and one or more memories having at least one computer program stored therein, the at least one computer program being loaded and executed by the one or more processors to implement the audio processing method of any of claims 1 to 6.
14. A storage medium having stored therein at least one computer program which is loaded and executed by a processor to implement the audio processing method according to any one of claims 1 to 6.
15. A computer program product, characterized in that the computer program product comprises at least one computer program which is loaded and executed by a processor to implement the audio processing method as claimed in any one of claims 1 to 6.
CN202011156439.5A 2020-10-26 2020-10-26 Audio processing method and device, electronic equipment and storage medium Active CN112221137B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011156439.5A CN112221137B (en) 2020-10-26 2020-10-26 Audio processing method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN112221137A CN112221137A (en) 2021-01-15
CN112221137B true CN112221137B (en) 2022-04-26

Family

ID=74109644

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011156439.5A Active CN112221137B (en) 2020-10-26 2020-10-26 Audio processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112221137B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112717395B (en) * 2021-01-28 2023-03-03 腾讯科技(深圳)有限公司 Audio binding method, device, equipment and storage medium
CN116208704A (en) * 2021-06-24 2023-06-02 北京荣耀终端有限公司 Sound processing method and device
CN113713371B (en) * 2021-08-31 2023-07-21 腾讯科技(深圳)有限公司 Music synthesis method, device, equipment and medium
JP7266142B1 (en) 2022-09-30 2023-04-27 株式会社Cygames Program, processing device and processing method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104133652A (en) * 2014-06-10 2014-11-05 腾讯科技(深圳)有限公司 Audio playing control method and terminal
CN109144610A (en) * 2018-08-31 2019-01-04 腾讯科技(深圳)有限公司 Audio frequency playing method, device, electronic device and computer readable storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20120096778A (en) * 2011-02-23 2012-08-31 주식회사 두빅 Service system and method for multiplayer team match in multiplayer online first person shooting game




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40037813

Country of ref document: HK

GR01 Patent grant