CN117563223A - Game audio processing method and device, storage medium and electronic device

Info

Publication number: CN117563223A
Authority: CN (China)
Prior art keywords: game, audio, scene, virtual game, note
Legal status: Pending (the status is an assumption, not a legal conclusion)
Application number: CN202311369882.4A
Other languages: Chinese (zh)
Inventors: 沈申易, 温哲奇, 孙超
Current Assignee: Netease Hangzhou Network Co., Ltd.
Original Assignee: Netease Hangzhou Network Co., Ltd.
Application filed by Netease Hangzhou Network Co., Ltd.
Priority to CN202311369882.4A
Publication of CN117563223A

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50: Controlling the output signals based on the game progress
    • A63F 13/54: Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
    • A63F 2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/60: Methods for processing data by generating or executing the game program
    • A63F 2300/6063: Methods for processing data by generating or executing the game program for sound processing
    • A63F 2300/6081: Methods for processing data by generating or executing the game program for sound processing generating an output signal, e.g. under timing constraints, for spatialization

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a game audio processing method and device, a storage medium, and an electronic device. The method comprises the following steps: controlling a virtual game character to move in a virtual game scene in response to a movement control operation on the virtual game character; during the movement of the virtual game character, acquiring a target melody note triggered by the movement behavior of the virtual game character, wherein the target melody note is randomly selected from a note set corresponding to the movement behavior, and the note set is determined based on the original scene audio adapted to the virtual game scene; and playing the target melody note and the original scene audio in a superimposed manner. The method and the device solve the technical problems in the related art of low configuration flexibility of game scene music and poor game immersion caused by setting a fixed scene music sample for a game scene.

Description

Game audio processing method and device, storage medium and electronic device
Technical Field
The present application relates to the field of game audio technologies, and in particular, to a game audio processing method and apparatus, a storage medium, and an electronic device.
Background
In three-dimensional game development, game scene music not only adds emotion and atmosphere to the game but also enriches its story and characters, so sound designers often design scene music that is highly related to the map scene to improve player immersion. In the related art, when a graphical interface sound design tool (FMOD Designer) is used to configure game scene music with various presentation modes, a fixed scene music sample is set for each scene; when a player triggers a sound event, for example by passing through different areas of a large scene or encountering a monster that raises the intensity of the game, FMOD Designer switches the game scene music by reading the different sound events. However, in this presentation mode, because the game lacks scene music interaction that is highly bound to player behavior, the player cannot actively drive changes in the game scene music, which affects the player's immersion and interest in the game.
In view of the above problems, no effective solution has been proposed at present.
Disclosure of Invention
At least some embodiments of the present application provide a game audio processing method, apparatus, storage medium, and electronic device, so as to at least solve the technical problems of low configuration flexibility of game scene music and poor game immersion caused by setting a fixed scene music sample for a game scene in the related art.
According to one embodiment of the present application, there is provided a game audio processing method including: controlling a virtual game character to move in a virtual game scene in response to a movement control operation on the virtual game character; during the movement of the virtual game character, acquiring a target melody note triggered by the movement behavior of the virtual game character, wherein the target melody note is randomly selected from a note set corresponding to the movement behavior, and the note set is determined based on the original scene audio adapted to the virtual game scene; and playing the target melody note and the original scene audio in a superimposed manner.

There is also provided, in accordance with an embodiment of the present application, a game audio processing device including: a control module, configured to control a virtual game character to move in a virtual game scene in response to a movement control operation on the virtual game character; an acquisition module, configured to acquire, during the movement of the virtual game character, a target melody note triggered by the movement behavior of the virtual game character, wherein the target melody note is randomly selected from a note set corresponding to the movement behavior, and the note set is determined based on the original scene audio adapted to the virtual game scene; and a processing module, configured to play the target melody note and the original scene audio in a superimposed manner.

According to one embodiment of the present application, there is also provided a computer-readable storage medium having a computer program stored therein, wherein the computer program is configured to execute any one of the above game audio processing methods when run.
There is further provided, in accordance with an embodiment of the present application, an electronic device including a memory having a computer program stored therein and a processor configured to run the computer program to perform the game audio processing method of any one of the above.
In at least some embodiments of the present application, a virtual game character is controlled to move in a virtual game scene in response to a movement control operation on the virtual game character; during the movement of the virtual game character, a target melody note triggered by the movement behavior of the virtual game character is acquired; and finally the target melody note and the original scene audio are played in a superimposed manner. This achieves the purpose of flexibly configuring game scene music based on the movement behavior of the virtual game character, achieves the technical effects of improving the configuration flexibility of game scene music and improving game immersion, and thereby solves the technical problems in the related art of low configuration flexibility of game scene music and poor game immersion caused by setting a fixed scene music sample for a game scene.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application. In the drawings:
FIG. 1 is a block diagram of a hardware architecture of a mobile terminal for a game audio processing method according to one embodiment of the present application;
FIG. 2 is a flow chart of a method of game audio processing according to one embodiment of the present application;
FIG. 3 is an example staff diagram of a longitudinal design according to one embodiment of the present application;
FIG. 4 is an example staff diagram of a lateral design according to one embodiment of the present application;
FIG. 5 is a schematic diagram of a game audio overlay play according to one embodiment of the present application;
FIG. 6 is a schematic diagram of a further game audio overlay play according to one embodiment of the present application;
FIG. 7 is a schematic diagram of a game audio processing method according to one embodiment of the present application;
FIG. 8 is a block diagram of a game audio processing device according to one embodiment of the present application;
FIG. 9 is a schematic diagram of an electronic device according to one embodiment of the present application.
Detailed Description
In order to make the solution of the present application better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, rather than all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein without inventive effort shall fall within the scope of protection of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, some of the terms or terminology appearing in the description of the embodiments of the present application are explained as follows:
Audio middleware: refers to a set of software tools that provide audio processing functionality for game developers and help them design, implement, and manage audio in a game. Audio middleware provides a visual interface that allows developers to create and edit audio, sound effects, and music, and to use them in games. In addition, audio middleware provides various tools and algorithms, such as three-dimensional sound effects, dynamic mixing, and advanced interactive music, which help game developers achieve high-quality audio effects in games. Using audio middleware spares game developers from writing complex audio processing code themselves, allowing them to focus on other aspects of the game. Common audio middleware includes FMOD, Wwise, CRIWARE, and the like.
Interaction: in the embodiments of the present application, interaction refers to game interaction, that is, the interaction behaviors a player generates with characters, props, the environment, and the like in the game world; for example, clicking a character to move it, clicking an icon to open an interface, and clicking a button to start the game all belong to game interaction behaviors. In order to draw the player deeper into the game and increase retention, game interaction needs to give the player a rich sense of experience in the game world and bring a game experience different from reality, so game interaction pursues the user's emotional experience.
FMOD: an audio middleware. FMOD includes an underlying sound engine (FMOD Ex), a graphical interface sound design tool (FMOD Designer), and an FMOD event system. FMOD Ex contains all the underlying functions of FMOD, such as a software mixer, a digital signal processing (DSP) engine, hardware interface output modules, and three-dimensional functions; FMOD Designer is used to compose complex sound events and music that are played back through the FMOD event system; the FMOD event system is an application layer built on FMOD Ex and is used to play the content created with the FMOD Designer tool. Because the authoring work is completed by the sound designer through the FMOD event system and FMOD Designer, a great deal of heavy work is handled there, which greatly simplifies the programmer's work.
Game scene music: in three-dimensional game development, an audio middleware usually needs to be called through the game engine in a map scene to play the game audio corresponding to a sound event. If that game audio is music and the music is played only in the map scene, it is called game scene music; a sound designer usually designs game scene music highly related to the map scene to improve player immersion.
In the related art, when a graphical interface sound design tool (FMOD Designer) is used to configure game scene music with various presentation modes, a fixed scene music sample is set for each scene; when a player triggers a sound event, for example by passing through different areas of a large scene or encountering a monster that raises the intensity of the game, FMOD Designer switches the game scene music by reading the different sound events. However, in this presentation mode, because the game lacks scene music interaction that is highly bound to player behavior, the player cannot actively drive changes in the game scene music, which affects the player's immersion and interest in the game.
Specifically, general scene music switching often adopts a gradual switch after a state change, so the player cannot obtain a scene music change experience that is highly related to his or her own behavior. In addition, with conventional music interaction logic, the listening experience falls into a regular repetition after the player has experienced the game for a long time; the player easily predicts the music to be played next and thus loses the sense of freshness, and because the music interaction is set to trigger at fixed time points, the player's sense of immersion does not last.
In summary, the related art has the technical problems of low flexibility of configuration of game scene music and poor immersion of games, and no effective solution has been proposed at present for the problems.
The above-described method embodiments to which the present disclosure relates may be performed in a mobile terminal, a computer terminal or similar computing device. Taking the mobile terminal as an example, the mobile terminal can be a smart phone, a tablet computer, a palm computer, a mobile internet device, a PAD, a game machine and other terminal devices. Fig. 1 is a block diagram of a hardware structure of a mobile terminal of a game audio processing method according to an embodiment of the present application. As shown in fig. 1, the mobile terminal may include one or more (only one is shown in fig. 1) processors 102 (the processor 102 may include, but is not limited to, a Central Processing Unit (CPU), a Graphics Processor (GPU), a Digital Signal Processing (DSP) chip, a Microprocessor (MCU), a programmable logic device (FPGA), a neural Network Processor (NPU), a Tensor Processor (TPU), an Artificial Intelligence (AI) type processor, etc.) and a memory 104 for storing data, and in one embodiment of the present application, may further include: input output device 108 and display device 110.
In some optional embodiments based on game scenes, the device may further provide a human-machine interaction interface with a touch-sensitive surface. The human-machine interaction interface may sense finger contacts and/or gestures to interact with a graphical user interface (GUI), and the human-machine interaction functions may include interactions such as creating web pages, drawing, word processing, making electronic documents, games, video conferencing, instant messaging, sending and receiving e-mail, call interfaces, playing digital video, playing digital music, and/or web browsing. The executable instructions for performing these human-machine interaction functions are configured/stored in a computer program product or readable storage medium executable by one or more processors.
It will be appreciated by those skilled in the art that the structure shown in fig. 1 is merely illustrative and not limiting of the structure of the mobile terminal described above. For example, the mobile terminal may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
According to one embodiment of the present application, there is provided an embodiment of a game audio processing method, it being noted that the steps shown in the flowcharts of the drawings may be performed in a computer system such as a set of computer executable instructions, and although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from that herein.
Fig. 2 is a flowchart of a game audio processing method according to one embodiment of the present application, as shown in fig. 2, the method includes the steps of:
step S21, responding to the movement control operation of the virtual game character, and controlling the virtual game character to move in the virtual game scene;
step S22, in the moving process of the virtual game character, obtaining a target melody note triggered by the moving action of the virtual game character, wherein the target melody note is randomly selected from a note set corresponding to the moving action, and the note set is determined based on the adaptive original scene audio in the virtual game scene;
step S23, the target melody notes and the original scene audio are played in a superposition mode.
The above game audio processing method is applied to a plurality of different types of three-dimensional (3D) games, for example: 3D shooting games, 3D sports games, 3D action games, 3D survival games, 3D magic games, 3D fairy games, 3D strategy games, 3D adventure games, and the like. Each of the plurality of different types of three-dimensional games corresponds to a different game music configuration. The game music configuration includes: a game map corresponding to each type of three-dimensional game, background music corresponding to the game map, and a note set corresponding to the game map. Taking a 3D racing game as an example, the 3D racing game uses a racing game map, background music adapted to the racing environment, and a note set tailored to the racing environment. Taking a 3D strategy game, such as a 3D ancient battlefield game, as an example, the 3D ancient battlefield game uses an ancient battlefield game map, background music adapted to the ancient battlefield environment, and a note set customized for the ancient battlefield environment.
The above-mentioned movement control operation on the virtual game character may be implemented by, but not limited to, a keyboard, a mouse, a game pad, a mobile terminal touch screen, or a Virtual Reality (VR) device. Specifically, when the keyboard is used for the movement control operation, the direction keys on the keyboard control the movement direction of the virtual game character, and pressing the space key can make the virtual game character jump. When the mouse is used, the orientation of the virtual game character is controlled through mouse movement, and a left click can trigger an attack or interaction. When the game pad is used, the joystick or direction keys of the game pad control the movement of the virtual game character, and the other buttons can be used for operations such as jumping and attacking. When the mobile terminal touch screen is used, for games on mobile devices the movement of the virtual game character can be controlled through the touch screen: sliding upward makes the virtual game character jump, and sliding left or right changes the direction of the virtual game character. When a VR device is used, the movement of the virtual game character may be controlled through a head tracker, and a handle or gesture recognizer may be used for other operations such as attacking and interacting.
It should be noted that the above movement control operations on the virtual game character are given in the embodiments of the present application merely as examples; a specific implementation may be selected and optimized according to the game requirements and the characteristics of the platform, which is not limited in the embodiments of the present application.
During the movement of the virtual game character, the movement behavior of the virtual game character is associated with the terrain in the virtual game scene. For example, when the terrain within the virtual game scene is land, the movement behavior of the virtual game character may be slow walking, fast walking, jogging, fast running, jumping, or the like; when the terrain in the virtual game scene is in water, the movement behavior may be floating, diving, or the like; and when the terrain within the virtual game scene is in the air, the movement behavior may be gliding, diving, or the like.
In an alternative implementation, the playing speed of the target melody note is determined based on the speed of movement of the virtual game character. For example, the greater the step frequency, i.e., the faster the movement speed, when the virtual game character walks on land, the faster the corresponding target melody note is played; the smaller the step frequency when the virtual game character walks on land, i.e., the slower the moving speed, the slower the playing speed of the corresponding target melody note.
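As an illustrative sketch only (not part of the claimed method), the mapping from movement speed to note playback speed could be expressed as follows; the step-frequency input and the scaling constants are assumptions made for the example.

```python
def note_play_interval(step_frequency_hz: float,
                       base_interval_s: float = 0.5,
                       base_step_frequency_hz: float = 2.0) -> float:
    """Map the character's step frequency to the interval between melody notes.

    A higher step frequency (faster movement) yields a shorter interval, i.e.
    the triggered melody notes play back faster; a lower step frequency
    stretches the interval. All constants are illustrative only.
    """
    if step_frequency_hz <= 0:
        return base_interval_s
    return base_interval_s * (base_step_frequency_hz / step_frequency_hz)

# Example: 2 steps/s -> 0.5 s between notes; 4 steps/s -> 0.25 s.
```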
The target melody note is randomly selected from the note set corresponding to the movement behavior, so different target melody notes can be determined based on different movement behaviors of the virtual game player, and the target melody note and the original scene audio are played in a superimposed manner. In this way, the movement behavior of the virtual game character generates music interaction, and the interactive change can be created quickly and in real time.

Specifically, when the virtual game character repeatedly enters the same region in the virtual game scene, different target melody notes are randomly selected from the note set corresponding to the movement behavior each time. Taking a 3D adventure game as an example, when the virtual game character repeatedly enters the same adventure gate in the virtual adventure game environment, different target melody notes are randomly selected from the note set corresponding to the adventure behavior each time, reflecting the randomness of game music playback. In addition, when the virtual game character leaves a first region and enters a second region in the virtual game scene, the target melody note randomly selected for the first region is different from the target melody note randomly selected for the second region. Taking a 3D martial-arts (wuxia) game as an example, the first region may be a wilderness area of the wuxia game scene, and the second region may be a dungeon (instance) area of the wuxia game scene. When the virtual game character leaves the wilderness area and enters the dungeon area, the target melody note randomly selected for the wilderness area is different from the target melody note randomly selected for the dungeon area, again reflecting the randomness of game music playback.
Based on the steps S21 to S23, the virtual game character is controlled to move in the virtual game scene by responding to the movement control operation of the virtual game character, so that the target melody notes triggered by the movement behavior of the virtual game character are obtained in the movement process of the virtual game character, and finally the target melody notes and the original scene audio are overlapped and played, so that the purpose of flexibly configuring the game scene music based on the movement behavior of the virtual game character is achieved, the technical effects of improving the configuration flexibility of the game scene music and improving the game immersion are achieved, and the technical problems of low configuration flexibility and poor game immersion of the game scene music caused by setting fixed scene music samples for the game scene in the related art are solved.
The game audio processing method in the embodiment of the application is further described below.
Optionally, in step S22, during the movement of the virtual game character, acquiring the target melody note triggered by the movement behavior of the virtual game character includes: and acquiring a target melody note triggered by the movement behavior of the virtual game character at a target moment, wherein the target moment is a moment determined according to a preset time interval in the game interaction process of the movement behavior and the virtual model in the virtual game scene.
The virtual model may be a specific block model in a virtual game scene, and the target time is a time determined according to a preset time interval in a game interaction process of the mobile behavior and the specific block model in the virtual game scene.
Continuing with the walking behavior as an example, when the footsteps of the virtual game character fall on the specific land block model during walking, a game interaction is generated, and the target moment may be the moment reached every 3-8 steps during the game interaction with the specific land block model, so that one triggered target melody note is obtained each time the virtual game character walks another 3-8 steps.
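A minimal sketch of the footstep-driven trigger described above, assuming a per-character step counter; the 3-8 step window mirrors the interval given in this example and is re-randomized after each trigger.

```python
import random

class FootstepNoteTrigger:
    """Fires a melody-note trigger once every 3-8 footsteps on the specific land block model."""

    def __init__(self, min_steps: int = 3, max_steps: int = 8):
        self.min_steps = min_steps
        self.max_steps = max_steps
        self._steps_left = random.randint(min_steps, max_steps)

    def on_footstep(self, on_special_block: bool) -> bool:
        """Call once per footstep; returns True when a target melody note should be triggered."""
        if not on_special_block:
            return False
        self._steps_left -= 1
        if self._steps_left <= 0:
            # Re-randomize the window so the next trigger again falls 3-8 steps later.
            self._steps_left = random.randint(self.min_steps, self.max_steps)
            return True
        return False
```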
Optionally, during the movement of the virtual game character, acquiring the target melody note triggered by the movement behavior of the virtual game character includes: and responding to the movement behavior of the virtual game character to meet a preset triggering condition, and selecting a target melody note from a plurality of alternative melody notes contained in the note set, wherein the preset triggering condition is used for determining whether the virtual game character is positioned in a preset scene area in the virtual game scene and whether the preset scene area is configured with original scene audio.
The original scene audio is the background music audio corresponding to the virtual game scene, and the preset scene area is a specific game map or game land block. Within the preset scene area the audio superposition function is in an on state, so audio superposition can be performed; outside the preset scene area the audio superposition function is in an off state, so audio superposition cannot be performed. When the movement behavior is located in a specific game map or game land block of the virtual game scene at the target moment and the background music audio is being played at the target moment, it is determined that the movement behavior of the virtual game character satisfies the preset trigger condition.
When the movement behavior of the virtual game character satisfies the preset trigger condition, a target melody note is selected from the plurality of alternative melody notes contained in the note set; the plurality of alternative melody notes are divided in units of single notes and grouped according to a preset scale range. For example, the plurality of alternative melody notes are divided into segments of one note each, and segments lying in the same octave range are placed in the same group.
In an alternative embodiment, when the movement behavior of the virtual game character satisfies the preset trigger condition, the target melody notes are randomly selected from the plurality of alternative melody notes included in the note set, thereby ensuring randomness and diversity of the generated melody.
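The grouping and random selection could be sketched as below; representing notes as MIDI numbers and choosing an octave group before choosing a note within it are assumptions made only for illustration.

```python
import random
from collections import defaultdict

def group_by_octave(note_set: list[int]) -> dict[int, list[int]]:
    """Group candidate melody notes (MIDI numbers) so notes in the same octave range share a group."""
    groups: dict[int, list[int]] = defaultdict(list)
    for midi_note in note_set:
        groups[midi_note // 12].append(midi_note)  # 12 semitones per octave
    return dict(groups)

def pick_target_note(note_set: list[int]) -> int:
    """Randomly pick a target melody note: first an octave group, then a note within it."""
    groups = group_by_octave(note_set)
    chosen_group = random.choice(list(groups.values()))
    return random.choice(chosen_group)
```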
Based on the above-mentioned alternative embodiment, by responding to the movement behavior of the virtual game character to meet the preset trigger condition, the target melody notes can be quickly selected from the plurality of alternative melody notes contained in the note set, so that the target melody notes are used for audio superposition, and the configuration flexibility of the game scene music is further improved.
Optionally, the game audio processing method in the embodiment of the present application further includes: acquiring the note mode to be adopted by the note set based on the main chord composition mode of the original scene audio; and determining, according to the note mode, the plurality of alternative melody notes contained in the note set.
Specifically, when the main chord composition of the background music audio is based on the key of A minor, the note mode adopted by the audio generated in real time is also A minor, and all the in-key tones within two octaves of A minor are selected as the plurality of alternative melody notes in the note set. In this way, the vertical (simultaneous) listening experience remains continuously harmonious, which is a characteristic of in-key tones, thereby ensuring the player's game immersion.
FIG. 3 is an example staff of a longitudinal design according to one embodiment of the present application. As shown in FIG. 3, the first row contains all the in-key natural tones of A minor within two octaves, that is, the A natural minor scale written out across two octaves, and the second row contains the main triads (chords) of A minor. When the background music audio is composed based on the chords of A minor, if notes within the A minor key are selected as the plurality of alternative melody notes contained in the note set, harmony is sustained in the vertical listening experience.
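For the longitudinal design above, the two-octave A natural minor candidate set can be enumerated as in the following sketch; the MIDI numbering and the A3-A5 range are assumptions chosen only to make the example concrete.

```python
# Semitone offsets of the A natural minor scale degrees relative to A: A B C D E F G.
A_MINOR_STEPS = [0, 2, 3, 5, 7, 8, 10]

def a_minor_two_octaves(root_midi: int = 57) -> list[int]:
    """All in-key tones of A natural minor across two octaves, starting at A3 (MIDI 57)."""
    notes = [root_midi + octave * 12 + step
             for octave in range(2)
             for step in A_MINOR_STEPS]
    notes.append(root_midi + 24)  # close the range on the upper A (A5)
    return notes

# a_minor_two_octaves() -> [57, 59, 60, 62, 64, 65, 67, 69, 71, 72, 74, 76, 77, 79, 81]
```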
Optionally, determining, by the note-style, a plurality of alternative melody notes included in the note set includes: acquiring all melody notes contained in the note mode; and eliminating preset melody notes from all melody notes to obtain a plurality of alternative melody notes, wherein the preset melody notes refer to the melody notes to be eliminated, which are determined in advance according to musical scale.
Specifically, all the melody notes contained in the A minor mode are obtained, and then the B and F notes are removed from them; the remaining notes form the ancient Chinese pentatonic mode, and according to music theory, audio in the pentatonic mode easily forms a harmonious listening experience, so a note set designed in this way has a high probability of sounding harmonious during horizontal (sequential) playback.
FIG. 4 is an example staff of a transverse (lateral) design according to one embodiment of the present application. As shown in FIG. 4, on the basis of the in-key tones of A minor, the B and F tones, that is, the circled notes in the figure, are removed, and the remaining note set forms the ancient Chinese pentatonic mode, so the note set designed in this way has a high probability of sounding harmonious during horizontal playback.
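A sketch of the transverse design's note rejection, removing every B and F from a two-octave A minor set to leave the pentatonic candidates; the MIDI representation is an assumption carried over from the previous sketch.

```python
# Two-octave A natural minor candidate set (A3..A5) expressed as MIDI numbers.
A_MINOR_TWO_OCTAVES = [57, 59, 60, 62, 64, 65, 67, 69, 71, 72, 74, 76, 77, 79, 81]
EXCLUDED_PITCH_CLASSES = {11, 5}  # pitch classes of B and F (with C = 0)

def pentatonic_candidates(note_set: list[int]) -> list[int]:
    """Remove every B and F, leaving A-C-D-E-G: the pentatonic set used for horizontal playback."""
    return [n for n in note_set if n % 12 not in EXCLUDED_PITCH_CLASSES]

# pentatonic_candidates(A_MINOR_TWO_OCTAVES) -> [57, 60, 62, 64, 67, 69, 72, 74, 76, 79, 81]
```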
It should be noted that the embodiment of the present application merely gives an example implementation on the A natural minor scale; which notes to reject needs to be determined by the audio designer according to the scales of different keys, and the embodiment of the present application is not limited in this respect.
Based on the above-mentioned alternative embodiment, by obtaining all melody notes included in the note mode, and then removing preset melody notes from all melody notes, a plurality of alternative melody notes are obtained, so that the horizontal listening feeling can be continuously harmonious, and further the game immersion feeling of the game player is ensured.
Optionally, the game audio processing method in the embodiment of the present application further includes: acquiring preset behavior audio triggered by the movement behavior of the virtual game character at a target moment in response to the movement behavior of the virtual game character not meeting a preset triggering condition; and playing the preset behavior audio at the target moment.
Specifically, when the movement behavior is not located in a specific game map or a game land block in the virtual game scene at the target moment, determining that the movement behavior of the virtual game character does not meet a preset triggering condition at the target moment, further acquiring preset behavior audio triggered by the movement behavior of the virtual game character at the target moment, wherein the preset behavior audio can be the step sound audio corresponding to the movement behavior, and playing the step sound audio at the target moment.
Optionally, the game audio processing method in the embodiment of the present application further includes: sound events and a set of notes associated with the sound events are created based on the audio middleware.
In particular, the FMOD Designer in the audio middleware can be utilized to create sound events and note sets associated with the sound events to achieve flexible configuration of complex sound events and game scene music.
Optionally, in step S23, the playing of the target melody note and the original scene audio in a superimposed manner includes: and calling a sound event through the audio middleware, and performing superposition playing on the target melody notes and the original scene audio.
Specifically, fig. 5 is a schematic diagram of a game audio superposition playing according to an embodiment of the present application, as shown in fig. 5, by calling a sound event through an audio middleware, a target melody note and a background music audio are superposed and played, so that real-time interactive music can be synthesized in a virtual game scene, the game audio is associated with a movement behavior of a virtual game character, and the configuration flexibility of the game scene music is further increased.
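A hedged sketch of the superposed playback call follows; the AudioMiddleware wrapper and its method names are hypothetical stand-ins for the middleware event call (for example an FMOD sound event), not the actual FMOD API.

```python
class AudioMiddleware:
    """Hypothetical thin wrapper over the audio middleware's event system (names are illustrative)."""

    def post_event(self, event_name: str, **parameters) -> None:
        # In a real project this would forward to the middleware (e.g. an FMOD sound event);
        # the middleware's mixer then layers the new sound over whatever is already playing.
        print(f"posting sound event {event_name!r} with {parameters}")

def play_superimposed(middleware: AudioMiddleware, target_midi_note: int) -> None:
    """Trigger the melody-note sound event while the original scene audio keeps playing,
    so the two are superimposed by the middleware's mixer."""
    middleware.post_event("melody_note", midi_note=target_midi_note)
```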
Optionally, invoking the sound event through the audio middleware, and playing the target melody note and the original scene audio in a superimposed manner includes: and calling a sound event through the audio middleware, and performing superposition playing on the target melody notes, the original scene audio and the preset behavior audio triggered by the mobile behavior.
Specifically, fig. 6 is a schematic diagram of a game audio superposition playing according to another embodiment of the present application, as shown in fig. 6, by calling a sound event through an audio middleware, and performing superposition playing on a target melody note, a background music audio and a footstep sound audio, so that real-time interactive music can be synthesized in a virtual game scene, the game audio is associated with a movement behavior of a virtual game character, and configuration flexibility of the game scene music is further increased.
Optionally, the playing attribute of the target melody note is determined by the movement attribute of the movement behavior.
The play attribute may include a play speed, and the movement attribute may include a movement speed, a step frequency, and the like. In an alternative implementation, the playing speed of the target melody note is determined based on the speed of movement of the virtual game character. For example, the greater the step frequency, i.e., the faster the movement speed, when the virtual game character walks on land, the faster the corresponding target melody note is played; the smaller the step frequency when the virtual game character walks on land, i.e., the slower the moving speed, the slower the playing speed of the corresponding target melody note.
Optionally, providing a graphical user interface through the audio middleware, the graphical user interface displaying content at least partially including a game audio processing scene, creating sound events and note sets associated with the sound events based on the audio middleware comprising: creating a sound event for the movement behavior in response to a first touch operation acting on the graphical user interface; responsive to a second touch operation acting on the graphical user interface, a set of notes associated with the movement behavior is edited based on the sound event.
Specifically, the FMOD Designer provided by the audio middleware may perform the game audio processing method in the embodiment of the present application.
The graphical user interface further includes a first control (or a first touch area), and when a first touch operation acting on the first control (or the first touch area) is detected, a musical event is created for the movement behavior through the audio middleware FMOD designer.
The graphical user interface further comprises a second control (or a second touch area), and when a second touch operation acting on the second control (or the second touch area) is detected, the note set associated with the moving action is edited through the audio middleware FMOD designer.
It should be noted that the first touch operation and the second touch operation may be operations in which the user touches the display screen of the terminal device with a finger. A touch operation may include single-point touch and multi-point touch, and the touch operation at each touch point may include clicking, long pressing, heavy pressing, swiping, and the like. The first touch operation and the second touch operation may also be touch operations implemented through an input device such as a mouse or a keyboard.
Optionally, the game audio processing method in the embodiment of the present application is applied to a plurality of different types of three-dimensional games, where each type of three-dimensional game in the plurality of different types of three-dimensional games corresponds to a different game music configuration, and the game music configuration includes: a game map corresponding to each type of three-dimensional game, background music corresponding to the game map, and a note set corresponding to the game map.
The various types of three-dimensional games described above include, but are not limited to, war games, adventure games, racing games, and horror games. Specifically, war games usually have intense combat scenes and large-scale battle maps; the corresponding game music configuration may include combat sound effects, explosion sound effects, and the sound of troops marching, the background music may be a stirring military track to heighten the intensity of the game, and the note set may include marching rhythms and battle drumming. Adventure games generally have varied maps including forests, mountains, deserts, and the like; the corresponding game music configuration may include natural environment sound effects such as birdsong, wind, and running water, the background music may be music full of fantasy to strengthen the adventurous atmosphere, and the note set may include the sounds of fantastical instruments and melodies of the adventure theme. Racing games typically have various racing scenes and track maps; the corresponding game music configuration may include engine roars, tire friction sounds, and racing sound effects, the background music may be fast-paced to increase the tension of the race, and the note set may include the sound of racing engines and the rhythm of cars speeding past. Horror games usually have eerie maps and frightening plots; the corresponding game music configuration may include various horror sound effects such as screams and unsettling background effects, the background music may be chosen to build suspense and increase the horror atmosphere, and the note set may include tense atmospheric effects and melodies of the horror theme.
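One way to hold the per-game-type music configuration described above is a simple mapping, as in the sketch below; every asset name is an illustrative placeholder, not an asset from the application.

```python
# Illustrative per-game-type music configuration; all names are placeholders.
GAME_MUSIC_CONFIG = {
    "war": {
        "game_map": "battlefield_map",
        "background_music": "stirring_military_track",
        "note_set": "march_and_battle_drum_notes",
    },
    "adventure": {
        "game_map": "forest_mountain_desert_map",
        "background_music": "fantasy_theme",
        "note_set": "fantasy_instrument_notes",
    },
    "racing": {
        "game_map": "track_map",
        "background_music": "fast_tempo_theme",
        "note_set": "engine_rhythm_notes",
    },
    "horror": {
        "game_map": "haunted_map",
        "background_music": "suspense_theme",
        "note_set": "tense_atmosphere_notes",
    },
}

def music_config_for(game_type: str) -> dict:
    """Look up the game map, background music, and note set configured for a game type."""
    return GAME_MUSIC_CONFIG[game_type]
```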
Based on the above alternative embodiments, different types of three-dimensional games will choose different game music configurations according to their characteristics and atmosphere to enhance the game experience.
Optionally, when the virtual game character repeatedly enters the same region in the virtual game scene, different target melody notes are randomly selected from the note sets corresponding to the movement behaviors respectively.
Specifically, different target melody notes are randomly selected from the note set corresponding to the mobile behavior, and the virtual game character has different music experiences when repeatedly entering the same area, so that the game is more interesting and challenging, a player can experience new content when entering the same area each time, and the play value of the game and long-time game fun are increased. At the same time, the technology can also increase the variability of the game, so that the decisions and actions of players in the game produce different results, thereby improving the depth and playability of the game.
Optionally, when the virtual game character leaves from the first region and enters the second region in the virtual game scene, the target melody note randomly selected for the first region is different from the target melody note randomly selected for the second region.
Specifically, by selecting different target melody notes for different game areas, a unique music atmosphere can be created for each area, so that players feel different emotions and experiences in different scenes, and the immersion of the game is further enhanced.
Fig. 7 is a schematic diagram of a game audio processing method according to an embodiment of the present application, as shown in fig. 7, a preset behavior audio triggered by a movement behavior of a virtual game character at a target moment is obtained, so as to determine whether the virtual game character is located in a preset scene area in a virtual game scene, and determine whether the preset scene area is configured with an original scene audio. If the mobile behavior is not located in the preset scene area in the virtual game scene at the target moment, or the mobile behavior is located in the preset scene area in the virtual game scene at the target moment, but the original scene audio is not played at the target moment, only playing the preset behavior audio; if the mobile behavior is located in a preset scene area in the virtual game scene at the target moment and the original scene audio is played at the target moment, randomly selecting target melody notes from a plurality of alternative melody notes in the note set, calling sound events through an audio middleware, and performing superposition playing on the target melody notes, the original scene audio and the preset behavior audio triggered by the mobile behavior.
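The decision flow of FIG. 7 can be summarized in the following sketch; the helper inputs (in_preset_area, scene_audio_playing, play_audio) are assumptions that would map onto the engine's own queries and the middleware's playback call.

```python
import random

def on_movement_trigger(in_preset_area: bool,
                        scene_audio_playing: bool,
                        note_set: list[int],
                        play_audio) -> None:
    """Mirror the flow of FIG. 7: always play the preset behavior audio (e.g. footstep sound);
    superimpose a randomly chosen melody note only when the preset trigger condition holds."""
    play_audio("footstep")                       # preset behavior audio
    if in_preset_area and scene_audio_playing:   # preset trigger condition
        target_note = random.choice(note_set)    # random pick from the candidate notes
        play_audio("melody_note", midi_note=target_note)

# Example with a stand-in playback function:
# on_movement_trigger(True, True, [57, 60, 62, 64, 67], lambda name, **kw: print(name, kw))
```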
Based on the embodiments of the present application, music interaction is bound to the high-frequency movement behavior of the virtual game character, and the triggered notes are combined with the background music of the original scene into a new musical aggregate, giving the player brand-new, more real-time auditory feedback. In addition, because the target melody note is randomly selected from the plurality of alternative melody notes in the note set, the music of the same game scene, while remaining harmonious in listening experience, can differ for every player within the preset overall emotional direction; as icing on the cake, this gives play to the player's own creativity and increases the fun of the gameplay. The game audio processing method can be widely applied to the design of narrative and atmospheric scene music, thereby achieving the technical effects of improving the configuration flexibility of game scene music and improving game immersion.
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus the necessary general hardware platform, but of course also by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk), comprising several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method described in the embodiments of the present application.
In this embodiment, a game audio processing device is further provided, and the device is used to implement the foregoing embodiments and preferred embodiments, and will not be described in detail. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. While the means described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 8 is a block diagram of a game audio processing device according to one embodiment of the present application, as shown in fig. 8, the device includes:
a control module 801 for controlling movement of the virtual game character in the virtual game scene in response to a movement control operation of the virtual game character;
the obtaining module 802 is configured to obtain, during movement of the virtual game character, a target melody note triggered by the movement behavior of the virtual game character, where the target melody note is randomly selected from a note set corresponding to the movement behavior, and the note set is determined based on an original scene audio adapted in the virtual game scene;
and the processing module 803 is used for performing overlapped play on the target melody notes and the original scene audio.
Optionally, the obtaining module 802 is further configured to: and acquiring a target melody note triggered by the movement behavior of the virtual game character at a target moment, wherein the target moment is a moment determined according to a preset time interval in the game interaction process of the movement behavior and the virtual model in the virtual game scene.
Optionally, the obtaining module 802 is further configured to: and responding to the movement behavior of the virtual game character to meet a preset trigger condition, selecting a target melody note from a plurality of alternative melody notes contained in a note set, wherein the preset trigger condition is used for determining whether the virtual game character is positioned in a preset scene area in a virtual game scene and whether the preset scene area is configured with original scene audio, an audio superposition function is in an on state in the preset scene area, and the plurality of alternative melody notes are divided in units of notes and are grouped according to a preset scale range.
Optionally, the obtaining module 802 is further configured to obtain note modes to be adopted by the note set based on the primary and secondary string creation modes of the original scene audio; the game audio processing device further includes: a determining module 804 is configured to determine, by note adjustment, a plurality of alternative melody notes included in the note set.
Optionally, the determining module 804 is further configured to: acquiring all melody notes contained in the note mode; and eliminating preset melody notes from all melody notes to obtain a plurality of alternative melody notes, wherein the preset melody notes refer to the melody notes to be eliminated, which are determined in advance according to musical scale.
Optionally, the obtaining module 802 is further configured to obtain preset behavior audio triggered by the movement behavior of the virtual game character at the target moment in response to the movement behavior of the virtual game character not meeting the preset triggering condition; the processing module 803 is further configured to play the preset behavior audio at the target time.
Optionally, the game audio processing device further includes: a creation module 805 for creating sound events and a set of notes associated with the sound events based on the audio middleware.
Optionally, the processing module 803 is further configured to: and calling a sound event through the audio middleware, and performing superposition playing on the target melody notes and the original scene audio.
Optionally, the processing module 803 is further configured to: and calling a sound event through the audio middleware, and performing superposition playing on the target melody notes, the original scene audio and the preset behavior audio triggered by the mobile behavior.
Optionally, the playing attribute of the target melody note is determined by the movement attribute of the movement behavior.
Optionally, a graphical user interface is provided through the audio middleware, wherein the content displayed by the graphical user interface at least partially comprises a game audio processing scene, and the creation module 805 is further configured to: creating a sound event for the movement behavior in response to a first touch operation acting on the graphical user interface; responsive to a second touch operation acting on the graphical user interface, a set of notes associated with the movement behavior is edited based on the sound event.
Optionally, the game audio processing method in the embodiment of the present application is applied to a plurality of different types of three-dimensional games, where each type of three-dimensional game in the plurality of different types of three-dimensional games corresponds to a different game music configuration, and the game music configuration includes: a game map corresponding to each type of three-dimensional game, background music corresponding to the game map, and a note set corresponding to the game map.
Optionally, when the virtual game character repeatedly enters the same region in the virtual game scene, different target melody notes are randomly selected from the note sets corresponding to the movement behaviors respectively.
Optionally, when the virtual game character leaves from the first region and enters the second region in the virtual game scene, the target melody note randomly selected for the first region is different from the target melody note randomly selected for the second region.
It should be noted that each of the above modules may be implemented by software or hardware, and for the latter, it may be implemented by, but not limited to: the modules are all located in the same processor; alternatively, the above modules may be located in different processors in any combination.
Embodiments of the present application also provide a computer readable storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
Alternatively, in the present embodiment, the above-described computer-readable storage medium may include, but is not limited to: a usb disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a removable hard disk, a magnetic disk, or an optical disk, or other various media capable of storing a computer program.
Alternatively, in this embodiment, the above-mentioned computer-readable storage medium may be located in any one of the computer terminals in the computer terminal group in the computer network, or in any one of the mobile terminals in the mobile terminal group.
Alternatively, in the present embodiment, the above-described computer-readable storage medium may be configured to store a computer program for performing the steps of:
S1, responding to a movement control operation of a virtual game role, and controlling the virtual game role to move in a virtual game scene;
s2, in the moving process of the virtual game character, obtaining a target melody note triggered by the moving action of the virtual game character, wherein the target melody note is randomly selected from a note set corresponding to the moving action, and the note set is determined based on the original scene audio adapted in the virtual game scene;
and S3, performing superposition playing on the target melody notes and the original scene audio.
Optionally, the above computer readable storage medium is further configured to store program code for performing the steps of: and acquiring a target melody note triggered by the movement behavior of the virtual game character at a target moment, wherein the target moment is a moment determined according to a preset time interval in the game interaction process of the movement behavior and the virtual model in the virtual game scene.
Optionally, the above computer readable storage medium is further configured to store program code for performing the steps of: and responding to the movement behavior of the virtual game character to meet a preset trigger condition, selecting a target melody note from a plurality of alternative melody notes contained in a note set, wherein the preset trigger condition is used for determining whether the virtual game character is positioned in a preset scene area in a virtual game scene and whether the preset scene area is configured with original scene audio, an audio superposition function is in an on state in the preset scene area, and the plurality of alternative melody notes are divided in units of notes and are grouped according to a preset scale range.
Optionally, the above computer readable storage medium is further configured to store program code for performing the steps of: acquiring note modes to be adopted by a note set based on a main and string creation mode of the original scene audio; with the note adjustment, a plurality of alternative melody notes contained in the note set are determined.
Optionally, the above computer-readable storage medium is further configured to store program code for performing the steps of: acquiring all melody notes contained in the note mode; and removing preset melody notes from all of the melody notes to obtain the plurality of alternative melody notes, wherein the preset melody notes are melody notes to be excluded that are determined in advance according to the scale tonality.
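The following short sketch illustrates removing preset melody notes from all notes of a mode to obtain the alternative melody notes; the A natural minor mode and the excluded notes are arbitrary example values, not values prescribed by the method:

    # The note mode would in practice be derived from the main chord composition of
    # the original scene audio; A natural minor is used here only as an example.
    A_NATURAL_MINOR = ["A", "B", "C", "D", "E", "F", "G"]

    def alternative_notes(mode_notes, excluded):
        """Remove the preset melody notes (determined in advance per scale tonality)
        from all notes of the mode to obtain the alternative melody notes."""
        return [n for n in mode_notes if n not in excluded]

    print(alternative_notes(A_NATURAL_MINOR, excluded={"B", "F"}))
    # -> ['A', 'C', 'D', 'E', 'G']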
Optionally, the above computer readable storage medium is further configured to store program code for performing the steps of: acquiring preset behavior audio triggered by the movement behavior of the virtual game character at a target moment in response to the movement behavior of the virtual game character not meeting a preset triggering condition; and playing the preset behavior audio at the target moment.
Optionally, the above computer readable storage medium is further configured to store program code for performing the steps of: sound events and a set of notes associated with the sound events are created based on the audio middleware.
Optionally, the above computer-readable storage medium is further configured to store program code for performing the steps of: calling a sound event through the audio middleware, and performing superposition playing on the target melody note and the original scene audio.
Optionally, the above computer-readable storage medium is further configured to store program code for performing the steps of: calling a sound event through the audio middleware, and performing superposition playing on the target melody note, the original scene audio, and the preset behavior audio triggered by the movement behavior.
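The sketch below uses a generic stand-in for the audio middleware (it does not reflect the API of any specific middleware product) to show a sound event mixing the target melody note, the original scene audio and, when present, the preset behavior audio:

    class AudioMiddleware:
        """Generic stand-in for an audio middleware layer; all names are illustrative."""
        def __init__(self):
            self.events = {}

        def create_event(self, name, note_set):
            # A sound event is created and associated with a note set.
            self.events[name] = {"note_set": note_set}

        def post_event(self, name, target_note, scene_audio, behavior_audio=None):
            # Superimpose the target melody note on the original scene audio and,
            # if present, on the preset behavior audio (e.g. a footstep sample).
            layers = [scene_audio, target_note]
            if behavior_audio is not None:
                layers.append(behavior_audio)
            print(f"event '{name}' mixes layers: {layers}")

    mw = AudioMiddleware()
    mw.create_event("footstep_melody", ["C4", "E4", "G4"])
    mw.post_event("footstep_melody", "E4", "forest_bgm", behavior_audio="footstep_grass")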
Optionally, the playing attribute of the target melody note is determined by the movement attribute of the movement behavior.
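As a sketch of how a play attribute could be determined by a movement attribute (the specific mapping from speed to volume and duration is invented for the example):

    def play_attributes(move_speed, max_speed=6.0):
        """Map a movement attribute (speed) to play attributes of the note:
        faster movement plays the note louder and slightly shorter."""
        ratio = min(move_speed / max_speed, 1.0)
        return {"volume": 0.4 + 0.6 * ratio,     # ranges from 0.4 to 1.0
                "duration": 0.6 - 0.2 * ratio}   # seconds, from 0.6 down to 0.4

    print(play_attributes(3.0))   # e.g. volume ≈ 0.7, duration ≈ 0.5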
Optionally, the above computer readable storage medium is further configured to store program code for performing the steps of: creating a sound event for the movement behavior in response to a first touch operation acting on the graphical user interface; responsive to a second touch operation acting on the graphical user interface, a set of notes associated with the movement behavior is edited based on the sound event.
Optionally, the game audio processing method in the embodiment of the present application is applied to a plurality of different types of three-dimensional games, where each type of three-dimensional game in the plurality of different types of three-dimensional games corresponds to a different game music configuration, and the game music configuration includes: a game map corresponding to each type of three-dimensional game, background music corresponding to the game map, and a note set corresponding to the game map.
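A hypothetical per-game-type configuration table of the kind described above might look as follows (the game types, map names, audio names and note lists are placeholders):

    GAME_MUSIC_CONFIG = {
        "open_world": {"map": "world_01",   "bgm": "plains_theme", "notes": ["C4", "D4", "E4", "G4", "A4"]},
        "dungeon":    {"map": "dungeon_03", "bgm": "cavern_theme", "notes": ["A3", "C4", "D4", "E4", "G4"]},
        "racing":     {"map": "track_02",   "bgm": "engine_theme", "notes": ["E4", "G4", "B4", "D5"]},
    }

    def config_for(game_type):
        # Each type of three-dimensional game has its own map, background music and note set.
        return GAME_MUSIC_CONFIG[game_type]

    print(config_for("dungeon")["bgm"])   # -> cavern_theme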
Optionally, when the virtual game character repeatedly enters the same region in the virtual game scene, different target melody notes are randomly selected from the note sets corresponding to the movement behaviors respectively.
Optionally, when the virtual game character leaves from the first region and enters the second region in the virtual game scene, the target melody note randomly selected for the first region is different from the target melody note randomly selected for the second region.
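One way to realize the region behavior described in the two preceding paragraphs is to track the last note chosen per region, as in this illustrative sketch (the avoidance strategy is an assumption; the method itself only requires random selection):

    import random

    class RegionNotePicker:
        """Keeps picks distinct across regions and avoids repeating the note last
        chosen when a region is re-entered."""
        def __init__(self, note_sets_by_region):
            self.note_sets = note_sets_by_region
            self.last_pick = {}   # region -> last note chosen there

        def pick(self, region):
            others = {v for r, v in self.last_pick.items() if r != region}
            candidates = [n for n in self.note_sets[region]
                          if n != self.last_pick.get(region) and n not in others]
            note = random.choice(candidates or self.note_sets[region])
            self.last_pick[region] = note
            return note

    picker = RegionNotePicker({"first": ["C4", "E4", "G4"], "second": ["D4", "F4", "A4"]})
    print(picker.pick("first"), picker.pick("second"), picker.pick("first"))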
In the computer-readable storage medium of this embodiment, the virtual game character is controlled to move in the virtual game scene in response to a movement control operation of the virtual game character; during the movement, the target melody note triggered by the movement behavior of the virtual game character is acquired; and the target melody note is finally played superimposed on the original scene audio. This achieves the purpose of flexibly configuring game scene music based on the movement behavior of the virtual game character, thereby improving the configuration flexibility of game scene music and enhancing game immersion, and solving the technical problems of low configuration flexibility of game scene music and poor game immersion caused by setting fixed scene music samples for a game scene in the related art.
From the above description of the embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or in software combined with the necessary hardware. Thus, the technical solution according to the embodiments of the present application may be embodied in the form of a software product, which may be stored in a computer-readable storage medium (for example, a CD-ROM, a USB flash drive, or a removable hard disk) or on a network, and which includes several instructions to cause a computing device (for example, a personal computer, a server, a terminal device, or a network device) to perform the method according to the embodiments of the present application.
An exemplary embodiment of the present application further provides a computer-readable storage medium on which a program product capable of implementing the method described above is stored. In some possible implementations, the various aspects of the embodiments of the present application may also be implemented in the form of a program product comprising program code; when the program product runs on a terminal device, the program code causes the terminal device to carry out the steps according to the various exemplary embodiments of the present application described in the "exemplary methods" section of the embodiments.
A program product for implementing the above method according to an embodiment of the present application may employ a portable compact disc read-only memory (CD-ROM), comprise program code, and run on a terminal device such as a personal computer. However, the program product of the embodiments of the present application is not limited thereto; in the embodiments of the present application, the computer-readable storage medium may be any tangible medium that can contain or store the program for use by or in connection with an instruction execution system, apparatus, or device.
Any combination of one or more computer-readable media may be employed by the program product described above. The computer-readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
It should be noted that the program code embodied on the computer readable storage medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Embodiments of the present application also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
Optionally, the electronic device may further include a transmission device and an input/output device, where the transmission device is connected to the processor, and the input/output device is connected to the processor.
Alternatively, in the present embodiment, the above-described processor may be configured to execute the following steps by a computer program:
S1, in response to a movement control operation of a virtual game character, controlling the virtual game character to move in a virtual game scene;
S2, during the movement of the virtual game character, acquiring a target melody note triggered by the movement behavior of the virtual game character, wherein the target melody note is randomly selected from a note set corresponding to the movement behavior, and the note set is determined based on the original scene audio adapted in the virtual game scene;
and S3, performing superposition playing on the target melody note and the original scene audio.
Optionally, the above processor may be further configured to perform the following steps by a computer program: acquiring the target melody note triggered by the movement behavior of the virtual game character at a target moment, wherein the target moment is a moment determined at a preset time interval while the movement behavior interacts with a virtual model in the virtual game scene.
Optionally, the above processor may be further configured to perform the following steps by a computer program: in response to the movement behavior of the virtual game character meeting a preset triggering condition, selecting a target melody note from a plurality of alternative melody notes contained in the note set, wherein the preset triggering condition is used to determine whether the virtual game character is located in a preset scene area in the virtual game scene and whether the preset scene area is configured with original scene audio.
Optionally, the above processor may be further configured to perform the following steps by a computer program: acquiring a note mode to be adopted by the note set based on the main chord composition mode of the original scene audio; and determining, based on the note mode, the plurality of alternative melody notes contained in the note set.
Optionally, the above processor may be further configured to perform the following steps by a computer program: acquiring all melody notes contained in the note mode; and removing preset melody notes from all of the melody notes to obtain the plurality of alternative melody notes, wherein the preset melody notes are melody notes to be excluded that are determined in advance according to the scale tonality.
Optionally, the above processor may be further configured to perform the following steps by a computer program: acquiring preset behavior audio triggered by the movement behavior of the virtual game character at a target moment in response to the movement behavior of the virtual game character not meeting a preset triggering condition; and playing the preset behavior audio at the target moment.
Optionally, the above processor may be further configured to perform the following steps by a computer program: sound events and a set of notes associated with the sound events are created based on the audio middleware.
Optionally, the above processor may be further configured to perform the following steps by a computer program: calling a sound event through the audio middleware, and performing superposition playing on the target melody note and the original scene audio.
Optionally, the above processor may be further configured to perform the following steps by a computer program: calling a sound event through the audio middleware, and performing superposition playing on the target melody note, the original scene audio, and the preset behavior audio triggered by the movement behavior.
Optionally, the playing attribute of the target melody note is determined by the movement attribute of the movement behavior.
Optionally, the above processor may be further configured to perform the following steps by a computer program: creating a sound event for the movement behavior in response to a first touch operation acting on the graphical user interface; responsive to a second touch operation acting on the graphical user interface, a set of notes associated with the movement behavior is edited based on the sound event.
Optionally, the game audio processing method in the embodiment of the present application is applied to a plurality of different types of three-dimensional games, where each type of three-dimensional game in the plurality of different types of three-dimensional games corresponds to a different game music configuration, and the game music configuration includes: a game map corresponding to each type of three-dimensional game, background music corresponding to the game map, and a note set corresponding to the game map.
Optionally, when the virtual game character repeatedly enters the same region in the virtual game scene, different target melody notes are randomly selected from the note sets corresponding to the movement behaviors respectively.
Optionally, when the virtual game character leaves from the first region and enters the second region in the virtual game scene, the target melody note randomly selected for the first region is different from the target melody note randomly selected for the second region.
In the electronic device of this embodiment, the virtual game character is controlled to move in the virtual game scene in response to a movement control operation of the virtual game character; during the movement, the target melody note triggered by the movement behavior of the virtual game character is acquired; and the target melody note is finally played superimposed on the original scene audio. This achieves the purpose of flexibly configuring game scene music based on the movement behavior of the virtual game character, thereby improving the configuration flexibility of game scene music and enhancing game immersion, and solving the technical problems of low configuration flexibility of game scene music and poor game immersion caused by setting fixed scene music samples for a game scene in the related art.
Fig. 9 is a schematic diagram of an electronic device according to an embodiment of the present application. As shown in fig. 9, the electronic device 900 is only an example, and should not impose any limitation on the functions and scope of use of the embodiments of the present application.
As shown in fig. 9, the electronic device 900 is embodied in the form of a general-purpose computing device. Components of the electronic device 900 may include, but are not limited to: at least one processor 910, at least one memory 920, a bus 930 connecting different system components (including the memory 920 and the processor 910), and a display 940.
The above-mentioned memory 920 stores program code that can be executed by the processor 910, such that the processor 910 performs the steps according to the various exemplary implementations of the present application described in the method sections of the embodiments above.
The memory 920 may include readable media in the form of volatile memory units, such as Random Access Memory (RAM) 9201 and/or cache memory 9202, and may further include Read Only Memory (ROM) 9203, and may also include nonvolatile memory such as one or more magnetic storage devices, flash memory, or other nonvolatile solid state memory.
In some examples, the memory 920 may also include a program/utility 9204 having a set (at least one) of program modules 9205, such program modules 9205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data; each or some combination of these examples may include an implementation of a network environment. The memory 920 may further include memory located remotely from the processor 910, and such remote memory may be connected to the electronic device 900 via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The bus 930 may be one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processor bus, or a local bus using any of a variety of bus architectures.
Display 940 may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of electronic device 900.
Optionally, the electronic device 900 may also communicate with one or more external devices 1000 (e.g., a keyboard, a pointing device, a Bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 900, and/or with any device (e.g., a router, a modem, etc.) that enables the electronic device 900 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 950. Also, the electronic device 900 may communicate with one or more networks, such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet, through the network adapter 960. As shown in fig. 9, the network adapter 960 communicates with other modules of the electronic device 900 over the bus 930. It should be appreciated that, although not shown in fig. 9, other hardware and/or software modules may be used in connection with the electronic device 900, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
The electronic device 900 may further include: a keyboard, a cursor control device (e.g., a mouse), an input/output interface (I/O interface), a network interface, a power supply, and/or a camera.
It will be appreciated by those skilled in the art that the configuration shown in fig. 9 is merely illustrative and is not intended to limit the configuration of the electronic device. For example, the electronic device 900 may include more or fewer components than shown in fig. 9, or have a configuration different from that shown in fig. 9. The memory 920 may be used to store a computer program and corresponding data, such as the computer program and corresponding data of the game audio processing method in the embodiments of the present application. The processor 910 executes various functional applications and data processing by running the computer program stored in the memory 920, that is, implements the game audio processing method described above.
The foregoing embodiment numbers of the present application are merely for description and do not represent the advantages or disadvantages of the embodiments.
In the foregoing embodiments of the present application, the description of each embodiment has its own emphasis; for parts not described in detail in a given embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technical content may be implemented in other manners. The above-described apparatus embodiments are merely exemplary; for example, the division of the units is merely a logical function division, and there may be other divisions in actual implementation: a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, units, or modules, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions to cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other media capable of storing program code.
The foregoing is merely a preferred embodiment of the present application and it should be noted that modifications and adaptations to those skilled in the art may be made without departing from the principles of the present application and are intended to be comprehended within the scope of the present application.

Claims (17)

1. A game audio processing method, comprising:
in response to a movement control operation of a virtual game character, controlling the virtual game character to move in a virtual game scene;
in the moving process of the virtual game character, obtaining a target melody note triggered by the movement behavior of the virtual game character, wherein the target melody note is randomly selected from a note set corresponding to the movement behavior, and the note set is determined based on the original scene audio adapted in the virtual game scene;
and performing superposition playing on the target melody note and the original scene audio.
2. The game audio processing method according to claim 1, wherein acquiring the target melody note triggered by the movement behavior of the virtual game character during the movement of the virtual game character comprises:
And acquiring the target melody notes triggered by the movement behavior of the virtual game character at a target moment, wherein the target moment is a moment determined according to a preset time interval in the game interaction process of the movement behavior and a virtual model in the virtual game scene.
3. The game audio processing method according to claim 1, wherein acquiring a target melody note triggered by a movement behavior of the virtual game character during the movement of the virtual game character comprises:
in response to the movement behavior of the virtual game character meeting a preset triggering condition, selecting the target melody note from a plurality of alternative melody notes contained in the note set, wherein the preset triggering condition is used to determine whether the virtual game character is located in a preset scene area in the virtual game scene and whether the preset scene area is configured with the original scene audio.
4. The game audio processing method according to claim 1, characterized in that the game audio processing method further comprises:
acquiring a note mode to be adopted by the note set based on the main chord composition mode of the original scene audio;
And determining a plurality of alternative melody notes contained in the note set through the note mode.
5. The game audio processing method according to claim 4, wherein determining the plurality of alternative melody notes included in the note set based on the note mode includes:
acquiring all melody notes contained in the note mode;
and eliminating preset melody notes from all the melody notes to obtain the plurality of alternative melody notes, wherein the preset melody notes refer to melody notes to be eliminated, which are determined in advance according to musical scale tonality.
6. A game audio processing method according to claim 3, characterized in that the game audio processing method further comprises:
acquiring preset behavior audio triggered by the movement behavior of the virtual game character at a target moment in response to the movement behavior of the virtual game character not meeting the preset triggering condition;
and playing the preset behavior audio at the target moment.
7. The game audio processing method according to claim 1, characterized in that the game audio processing method further comprises:
a sound event is created based on the audio middleware and the set of notes associated with the sound event.
8. The game audio processing method according to claim 7, wherein the superimposed playing of the target melody notes and the original scene audio includes:
and calling the sound event through the audio middleware, and performing superposition playing on the target melody notes and the original scene audio.
9. The game audio processing method according to claim 8, wherein invoking the sound event through the audio middleware to play the target melody note with the original scene audio in a superimposed manner comprises:
and calling the sound event through the audio middleware, and performing superposition playing on the target melody notes, the original scene audio and the preset behavior audio triggered by the movement behavior.
10. The game audio processing method according to claim 8, wherein the play attribute of the target melody note is determined by the movement attribute of the movement behavior.
11. The method of claim 7, wherein a graphical user interface is provided through the audio middleware, content displayed by the graphical user interface at least partially containing a game audio processing scene, and wherein creating the sound event and the set of notes associated with the sound event based on the audio middleware comprises:
Creating the sound event for the mobile behavior in response to a first touch operation acting on the graphical user interface;
responsive to a second touch operation acting on the graphical user interface, editing the set of notes associated with the movement behavior based on the sound event.
12. The game audio processing method according to claim 1, wherein the game audio processing method is applied to a plurality of different types of three-dimensional games, each type of three-dimensional game in the plurality of different types of three-dimensional games corresponding to a different game music configuration, respectively, wherein the game music configuration comprises: a game map corresponding to each type of three-dimensional game, background music corresponding to the game map and a note set corresponding to the game map.
13. The game audio processing method according to claim 1, wherein, when the virtual game character repeatedly enters the same area in the virtual game scene, different target melody notes are randomly selected from the note set corresponding to the movement behavior each time.
14. The game audio processing method according to claim 1, wherein the target melody note randomly selected for the first region is different from the target melody note randomly selected for the second region when the virtual game character leaves from the first region and enters into the second region in the virtual game scene.
15. A game audio processing device, comprising:
a control module for controlling the virtual game character to move in the virtual game scene in response to a movement control operation of the virtual game character;
the acquisition module is used for acquiring target melody notes triggered by the movement behaviors of the virtual game characters in the movement process of the virtual game characters, wherein the target melody notes are randomly selected from note sets corresponding to the movement behaviors, and the note sets are determined based on the original scene audio adapted in the virtual game scene;
and the processing module is used for carrying out superposition playing on the target melody notes and the original scene audio.
16. A computer readable storage medium, characterized in that a computer program is stored in the computer readable storage medium, wherein the computer program is arranged to perform the game audio processing method as claimed in any one of claims 1 to 14 when being executed by a processor.
17. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, the processor being arranged to run the computer program to perform the game audio processing method of any of claims 1 to 14.
CN202311369882.4A 2023-10-20 2023-10-20 Game audio processing method and device, storage medium and electronic device Pending CN117563223A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311369882.4A CN117563223A (en) 2023-10-20 2023-10-20 Game audio processing method and device, storage medium and electronic device

Publications (1)

Publication Number Publication Date
CN117563223A true CN117563223A (en) 2024-02-20

Family

ID=89892509

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311369882.4A Pending CN117563223A (en) 2023-10-20 2023-10-20 Game audio processing method and device, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN117563223A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination