WO2024029572A1 - Game system, and game program and control method for game system - Google Patents


Info

Publication number
WO2024029572A1
Authority
WO
WIPO (PCT)
Prior art keywords
sound data
game
data
song
output sound
Prior art date
Application number
PCT/JP2023/028307
Other languages
French (fr)
Japanese (ja)
Inventor
明広 石原
暁 中田
康司 山中
義隆 東
Original Assignee
Konami Digital Entertainment Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Konami Digital Entertainment Co., Ltd.
Publication of WO2024029572A1 publication Critical patent/WO2024029572A1/en

Links

Images

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/54 Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A63F13/58 Controlling game characters or game objects based on the game progress by computing conditions of game characters, e.g. stamina, strength, motivation or energy level
    • A63F13/80 Special adaptations for executing a specific game genre or game mode
    • A63F13/814 Musical performances, e.g. by evaluating the player's ability to follow a notation
    • A63F13/825 Fostering virtual characters

Definitions

  • the present invention relates to a game system, a game program for the game system, and a control method for the game system, each of which simulates the growth of a game object to be grown and outputs audio according to the state of the game object.
  • Patent Document 1 discloses an audio mixdown device.
  • This audio mixdown device includes an audio file input section, a mixdown control section, an effect processing section, and a synthesized audio data output section.
  • the audio file input unit inputs an audio file recorded using karaoke on a device used by a user.
  • the audio file input unit stores the input audio file in a recorded audio file database in association with identification data.
  • the mixdown control unit selects a plurality of recorded audio files to be synthesized from the recorded audio file database in response to a request from a user.
  • the effect processing section performs effect processing on a plurality of recorded audio files and generates synthesized audio data.
  • the synthesized audio data output unit outputs the generated synthesized audio data.
  • in some games, a performance in which a game object such as a character appearing in the game sings may be staged, and the song may be output while the game is being played. One such game is a training game in which a user trains a character and improves the character's parameters.
  • This training game includes, for example, a performance in which the trained character sings.
  • a game system simulates the raising of a game object to be raised, performs an effect in which the game object plays a song or sings a song, and outputs the played song or the sung song.
  • the game system comprises: state acquisition means for acquiring state information indicating a state of the game object; data acquisition means for acquiring output sound data of the song, the output sound data corresponding to the state indicated by the state information; and audio output control means for outputting sound based on the acquired output sound data.
  • a game program is for a game system that simulates the raising of a game object to be raised, performs an effect in which the game object plays a song or sings a song, and outputs the played song or the sung song.
  • the game program causes a computer to function as: state acquisition means for acquiring state information indicating a state of the game object; data acquisition means for acquiring output sound data of the song, the output sound data corresponding to the state indicated by the state information; and audio output control means for outputting sound based on the acquired output sound data.
  • a control method is for a game system that includes a computer, simulates the raising of a game object to be raised, performs an effect in which the game object plays a song or sings a song, and outputs the played song or the sung song.
  • in the control method, the computer acquires state information indicating a state of the game object, acquires output sound data of the song corresponding to the state indicated by the state information, and outputs sound based on the acquired output sound data.
  • FIG. 1 is a schematic block diagram of a game system according to a first embodiment. The remaining drawings include a schematic diagram showing the proportion of inferior parts, an explanatory diagram of the inferior part and the performance part, and a flowchart for acquiring output sound data.
  • FIG. 2 is a schematic block diagram of a game system according to a second embodiment.
  • identification information is data composed of letters, numbers, symbols, images, or a combination thereof.
  • FIG. 1 is a schematic diagram showing the overall configuration of a game system 100.
  • the game system 100 includes a game terminal 10, which is an example of a user terminal, and a server 30.
  • the server 30 is configured as one logical server by combining a plurality of server units 52.
  • the server 30 may be configured by a single server unit 52.
  • the server 30 may be configured logically using cloud computing.
  • the server 30 is configured to be connectable to the network 50.
  • network 50 is configured to utilize the TCP/IP protocol to implement network communications.
  • a local area network LAN connects the server 30 and the Internet 51.
  • the Internet 51 as a WAN and a local area network LAN are connected via a router 53.
  • the game terminal 10 is also configured to be connected to the Internet 51.
  • Servers 30 may be interconnected by a local area network LAN or by the Internet 51.
  • the network 50 may be a leased line, a telephone line, an in-house network, a mobile communication network, another communication line, or a combination thereof, and it may be either wired or wireless.
  • the game terminal 10 is a computer device operated by a user.
  • the game terminal 10 includes a stationary or notebook personal computer 54 and a mobile terminal device 55 such as a mobile phone, including a smartphone.
  • the game terminal 10 includes various computer devices such as a stationary home game device, a portable game device, a portable tablet terminal device, and an arcade game machine.
  • the game terminal 10 can allow the user to enjoy various services provided by the server 30. Note that, below, an example in which the game terminal 10 is the mobile terminal device 55 will be mainly explained.
  • the server 30 transmits the program and data used for the game to the game terminal 10 via the network 50.
  • the game terminal 10 then stores the received program and data.
  • the game terminal 10 may be configured to read a program or data stored in an information storage medium (not shown). In this case, the game terminal 10 may acquire the program or data via an information storage medium.
  • the user can play various games on the game terminal 10.
  • the game includes elements for growing game objects.
  • the games include simulation games that simulate the growth of game objects, competitive trading card games, music games, board games, mahjong games, RPGs, horse racing games, fighting games, puzzle games, quiz games, and sports games such as baseball and soccer.
  • a game object is an object that is displayed or used on the game terminal 10.
  • game objects are used in game processing to progress the game, and include characters, cards, effects, equipment, items, and the like.
  • a material object is a game object that serves as a training material.
  • the training object is a copy of a character; that is, a plurality of training objects may exist for the same character.
  • the training object may be the character itself, that is, a material object. In some cases a material object can be duplicated, and the training object is a material object and/or a copy of a material object.
  • a configuration may also be adopted in which there is no training object as such and a predetermined training object is given to the user when the training game is started.
  • the training object may be a virtual card or the like corresponding to a character.
  • the game has a training part in which training objects are trained, and the training part is divided into multiple sections. Each section is composed of a plurality of turns, and in the last turn there is a live part in which the training object performs live. The number of turns in each section varies depending on the character that is the material of the training object, the ongoing scenario, and the like. In each turn, the training object can be made to perform a predetermined action.
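As a rough illustration, the section and turn structure of the training part described above can be modeled as follows; the function name and the per-section turn counts are assumptions for the example, not taken from the publication.

```python
def build_training_part(turns_per_section):
    """Return a flat list of turns; the last turn of each section hosts a live part."""
    part = []
    for section, n_turns in enumerate(turns_per_section, start=1):
        for turn in range(1, n_turns + 1):
            part.append({
                "section": section,
                "turn": turn,
                "live_part": turn == n_turns,  # live part in the last turn
            })
    return part

# The number of turns per section may vary by character or scenario.
schedule = build_training_part([3, 4, 5])
```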
  • an event targeting the training object occurs at an appropriate timing within each turn. For example, the event is an event for playing with friends, a training camp event, or the like. Multiple events may occur in one turn, and there may also be turns in which no event occurs.
  • a support event will occur with a predetermined probability during the training part.
  • the event object corresponds to, for example, a character that is a material object.
  • an event object is a virtual card on which a character is drawn.
  • a support event that occurs by configuring a deck is associated with an event object.
  • an event object of a singer character is associated with a support event that increases singing ability, and the support event occurs with a predetermined probability in a lesson that increases singing ability.
  • the user trains the training object by having it perform various actions. That is, the training object performs various actions in response to the user's instructions, and as a result the parameters of the training object change.
  • the actions that the training object performs include lessons, work, rest, going out, going to the hospital, and acquiring skills.
  • the actions taken by the training object may have effects such as changes in the parameters associated with the training object, acquisition or loss of abilities, acquisition or use of items, and changes in relationships with other game objects.
  • the training object may also be able to perform training camps, trips, live performances, reporting, photography, appearances, competitions, auditions, going to school, and the like.
  • the parameters of the training object are variables linked to object identification information that uniquely identifies the training object, and they change as the game progresses.
  • the parameters include information indicating the size or height of the ability, information indicating the presence or absence of the ability, and information indicating the state of the game object.
  • the parameter changes as the value of the parameter increases or decreases.
  • the parameters vary depending on whether the flag is turned on or off.
  • the parameters include singing ability, dancing ability, expressive ability, visual appeal (for example, values that increase with clothing, makeup, or hairstyle), acting ability, performance ability, mental strength, stamina, intelligence, charm, and the like.
  • consumption elements include skill points, in-game currency, and the like.
  • the points that can be acquired by performing an action may be a value that gives the user an advantage in acquiring skills.
  • the points are values that give an effect of reducing skill points required when acquiring a skill.
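One possible in-memory model for the parameters and skill acquisition described above, with numeric abilities, on/off flags, and skill points whose cost can be discounted; all names and numbers here are illustrative, not taken from the publication.

```python
class TrainingObject:
    def __init__(self):
        self.params = {"singing": 0, "dancing": 0, "stamina": 100}
        self.flags = set()      # acquired skills, modeled as on/off flags
        self.skill_points = 0

    def acquire_skill(self, skill, cost, discount=0):
        """Spend skill points (minus any discount) to turn a skill flag on."""
        needed = max(cost - discount, 0)
        if self.skill_points < needed:
            return False
        self.skill_points -= needed
        self.flags.add(skill)
        return True

obj = TrainingObject()
obj.skill_points = 30
ok = obj.acquire_skill("vibrato", cost=40, discount=15)  # needs 25 points
```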
  • an event may occur according to a predetermined scenario, such as a drama depicting friendship between game objects.
  • an event that affects the growth of the training object may occur with a predetermined probability in each turn.
  • a beneficial effect is, for example, an increase in the amount by which a parameter rises due to a lesson, or an increase in a parameter of the training object.
  • a disadvantageous effect is, for example, a decrease in the amount by which a parameter rises, or a decrease in a parameter of the training object.
  • the live part functions as a checkpoint to confirm the level of development.
  • the live part will be held during the last turn of each section.
  • a mini-game is played in which the live progresses automatically.
  • a live performance sung by a single training object, or by a unit made up of a plurality of game objects including the training object, is displayed as a performance video.
  • the live success conditions are achieved according to the parameters of the trained object.
  • the success condition may be that the singing ability exceeds a predetermined value, that the number of fans or ticket sales exceeds a predetermined number, or simply that the live performance itself is held.
  • if the success conditions are achieved, the training can be continued; if they are not achieved, the training ends.
  • the live part is not limited to one held at the last turn of each section as described above; it may be held only at the end of the training part, or in a part different from the training part (that is, not as part of the training part).
  • one turn progresses each time the training object performs an action. That is, the user causes the training object to perform an action every turn.
  • a training object executes a job selected by the user from among the jobs constituting a work route including a plurality of jobs.
  • the training object grows, and the parameters associated with the training object change.
  • a new job that the training object can perform may be opened.
  • work includes live performances, interviews, filming, appearances, competitions, and auditions.
  • the actions that the training object is made to perform include actions that do not consume a turn. For example, even if an action to acquire a skill is performed, the turn does not pass, and the training object can be made to perform another action. An action that does not consume a turn may also be performed in the turn of the live part before the start of the live performance. In the last turn of each section, a target live performance is performed as the live part. Conditions for holding a live performance may be set separately.
  • the event condition is to obtain a predetermined number of fans, ticket sales, etc.
  • the number of fans or the number of ticket sales increases by having the user perform an action such as work or by the occurrence of an event.
  • the live success rate in the live part increases or decreases depending on the parameters of the training object grown in the training part. In order to have a successful live performance, the user needs to take lessons in the training part to increase parameters.
  • the number of turns until the end of training is arbitrary, but an example is 72 turns, which corresponds to 6 years in the game.
  • the user trains at least one training object in the training part consisting of a plurality of turns.
  • the trained object can be used as an inherited object.
  • an arbitrary character can be used as the training object, and a training object that has been trained previously can be used as an inheritance object.
  • the training object can inherit the parameters, talents, job routes, etc. associated with the inheritance object.
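The inheritance step above could be sketched as follows: the new training object receives a fraction of each inheritance object's parameters. The 10% inheritance rate and the parameter names are assumptions for the example, not from the publication.

```python
def apply_inheritance(base_params, inheritance_objects, rate=0.1):
    """Add a fraction of each inheritance object's parameters to the base."""
    out = dict(base_params)
    for obj in inheritance_objects:
        for name, value in obj.items():
            out[name] = out.get(name, 0) + int(value * rate)
    return out

# Two inheritance objects, as in the selection flow described in the text.
params = apply_inheritance(
    {"singing": 50, "dancing": 40},
    [{"singing": 200}, {"singing": 100, "dancing": 300}],
)
```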
  • the game described above proceeds as follows.
  • the user selects a training object to be trained from among a plurality of game objects to be training materials.
  • an inheritance object includes an inheritance element to be inherited by the training object.
  • a user selects two inherited objects.
  • the number of selectable inherited objects may be one or three or more.
  • the user selects one or more event objects to construct a deck.
  • the user selects six event objects.
  • the number of selectable event objects may be five or less or seven or more.
  • a support event may occur during the training part.
  • the support events may be different from each other, and each has a predetermined influence on the growth of the growth object.
  • the training part begins.
  • the user selects one action from among lessons, work, rest, going to the hospital, and acquiring skills, and causes the training object to perform the action.
  • when the training object is in a bad state (for example, when it is sick or injured), the user selects the action of going to the hospital in order to resolve the bad state.
  • when the condition of the training object becomes poor, the user selects the action of going out to improve the condition.
  • while the condition is poor, the effectiveness of the lesson decreases, the amount of increase in the parameter decreases, the increase in the parameter is limited, or the parameter decreases.
  • the physical strength of the training object increases or decreases. Basically, when the training object takes a lesson or performs work, its physical strength decreases. If the physical strength is lower than a predetermined value, the probability of being injured by performing a lesson increases, the effectiveness of the lesson decreases, or the lesson cannot be selected. Therefore, the user selects a resting action to recover a predetermined amount of physical strength. Physical strength may also be recoverable through the occurrence of an event, the use of a skill or an item, or the like.
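A minimal sketch of the physical-strength rule above: lessons reduce strength, resting restores a fixed amount, and a lesson cannot be selected below a threshold. All numeric constants are illustrative assumptions, not values from the publication.

```python
LESSON_COST = 20     # physical strength spent per lesson
REST_RECOVERY = 40   # physical strength restored by resting
LOW_STAMINA = 20     # below this, lessons cannot be selected

def can_take_lesson(strength):
    return strength >= LOW_STAMINA

def take_lesson(strength):
    return strength - LESSON_COST

def rest(strength, cap=100):
    return min(strength + REST_RECOVERY, cap)

s = 30
s = take_lesson(s)                # drops to 10, below the threshold
blocked = not can_take_lesson(s)  # lesson is now unavailable
s = rest(s)                       # resting recovers a fixed amount
```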
  • the user selects a work action to obtain predetermined conditions for holding a live performance.
  • when the training object performs the job, the number of fans or the number of ticket sales increases.
  • the user selects an action to acquire skills.
  • the effects exerted by a skill may include an effect that makes it easier to succeed in an event or job, an effect that changes the parameters of a training object, an effect that increases the amount of increase in parameters due to lessons, and the like.
  • a live part will occur as a checkpoint.
  • if the success conditions are achieved, the training can be continued; if they are not achieved, the training ends. A plurality of checkpoints may be provided.
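The checkpoint behavior above (continue while each live part's success condition is met, otherwise the training ends) might be sketched like this; the per-section requirements and the use of singing ability as the check are assumptions for the example.

```python
def run_checkpoints(singing_by_section, requirements):
    """Return how many sections were cleared before the first failed live."""
    cleared = 0
    for singing, required in zip(singing_by_section, requirements):
        if singing < required:
            break               # success condition missed: training ends
        cleared += 1
    return cleared

cleared = run_checkpoints([120, 180, 210], [100, 150, 250])
```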
  • the game system 100 provides a game in which a game object plays a song or sings a song, and outputs a song to be played or a song to be sung. Furthermore, in the game, the raising of a game object to be raised is simulated.
  • This game system 100 includes a game terminal 10 and a server 30, as shown in FIG. In the following, an example of a game in which a game object sings a song and outputs the sung song will be mainly described.
  • the game terminal 10 includes a terminal control section 11 as an example of a terminal control means, a terminal storage section 12 as an example of a terminal storage means, a terminal communication section 13 as an example of a terminal communication means, a terminal operation section 14 as an example of an operation means, a terminal display section 15 as an example of a display means, and an audio output section 16 as an example of an audio output means.
  • the terminal control unit 11 is configured as a computer and includes a processor (not shown). This processor is, for example, a CPU (Central Processing Unit) or an MPU (Micro-Processing Unit). Further, the processor controls the entire game terminal 10 based on the control program and game program stored in the terminal storage unit 12, and also controls various processes in an integrated manner.
  • the terminal storage unit 12 is a computer-readable non-transitory storage medium.
  • the terminal storage unit 12 includes a RAM (Random Access Memory), which is system work memory for the processor to operate, a ROM (Read Only Memory) that stores programs and system software, and storage devices such as an HDD (Hard Disk Drive) and an SSD (Solid State Drive).
  • the CPU of the terminal control unit 11 executes processing operations such as various calculations, controls, and determinations according to a control program stored in the ROM or HDD of the terminal storage unit 12.
  • the terminal control unit 11 can also perform control according to a control program stored in an external storage medium, such as a portable recording medium, for example a CD (Compact Disc), a DVD (Digital Versatile Disc), a CF (Compact Flash) card, or a USB (Universal Serial Bus) memory, or a server on the Internet.
  • the terminal storage unit 12 stores a terminal program PG, which is an example of a game program, object data 12A, and terminal audio data 12B.
  • the object data 12A includes, as game object data, a character image, parameter values, information indicating the state of the object, etc., which are associated with object identification information that uniquely identifies the object.
  • the terminal audio data 12B includes the character's voice and sound data related to singing.
  • the terminal audio data 12B is waveform data in a predetermined format such as WAV format.
  • the terminal storage unit 12 stores data (not shown) necessary for game processing to advance the game, such as game images and game music.
  • the terminal program PG causes the terminal control unit 11, as a computer, to function as a state acquisition unit 11A which is an example of a state acquisition means, a data acquisition unit 11B which is an example of a data acquisition means, a game progress unit 11C which is an example of an audio output control means, and a generation unit 11D which is an example of a generation means. That is, the terminal control section 11 has each unit as a logical device realized by a combination of hardware and software. The terminal program PG can also be stored in another computer-readable non-transitory storage medium besides the terminal storage unit 12.
  • the terminal operation unit 14 is an input device through which the user inputs game operations.
  • the terminal display unit 15 is a device that displays game images, and is, for example, a liquid crystal display or an organic EL display.
  • the audio output unit 16 is an output device that outputs game music and the like, and is, for example, a speaker or headphones. Note that in FIG. 3, the terminal operation section 14 and the terminal display section 15 are shown separately. However, the terminal operation section 14 and the terminal display section 15 may be integrally configured as a touch panel. Further, the terminal operating section 14 may include a touch pad, a pointing device such as a mouse, a button, a key, a lever, a stick, etc. that are not integrated with the terminal display section 15. Further, the terminal operation unit 14 may be a device that detects the voice emitted by the user or the user's motion, and performs an operation according to the detection result.
  • audio is output based on output sound data according to the state of the training object. For example, in the early stages of the game, before the singing ability parameter has increased, the audio output unit 16 outputs audio based on the output sound data of a poor singer. In the final stage of the game, after the singing ability has increased, the audio output unit 16 outputs audio based on the output sound data of a skilled singer. Thereby, the user can audibly sense the growth of the training object.
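One way to picture this selection is a simple threshold mapping from the singing-ability parameter to one of three song variants; the thresholds and return values here are invented for illustration, not values from the publication.

```python
def select_output_sound(singing, poor_below=100, excellent_from=300):
    """Pick the sound-data variant matching the current singing ability."""
    if singing < poor_below:
        return "first_sound_data"    # poor state: many inferior parts
    if singing < excellent_from:
        return "second_sound_data"   # normal state: few inferior parts
    return "third_sound_data"        # excellent state: performance parts

early = select_output_sound(40)    # early game, low singing ability
late = select_output_sound(350)    # late game, high singing ability
```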
  • the state of the training object changes from a poor state to a normal state to an excellent state.
  • in the poor state, a poor song is output with a narrow vocal range and with the pitch of the highest or lowest notes differing from the original song.
  • a poor song may be output in which the interval or pitch differs from the original song, with the voice cracking or the pitch low and flat.
  • a poor song may be output in which the start of the song or a solo singing part is faster or slower than in the original song.
  • a poor song may be output in which the pitch differs significantly from the original song in a part where the pitch changes greatly or changes stepwise. That is, as an example, the inferior state may be a state in which at least some of the musical elements are inferior compared to the original song.
  • in the normal state, a normal song is output with a slightly wider vocal range and with the pitch of the highest or lowest notes differing only slightly from the original song.
  • a normal song may be output in which the interval or pitch differs slightly from the original song, with the voice slightly cracking or the pitch slightly low and flat.
  • a normal song may be output in which the start of the song or a solo singing part is slightly faster or slower than in the original song.
  • a normal song may be output in which the pitch differs slightly from the original song in a part where the pitch changes greatly or changes stepwise. That is, as an example, the normal state may be a state in which the musical elements are comparable to those of the original song.
  • in the excellent state, a good song is output with a wide vocal range and with the same waveform as the original song except for the performance part.
  • a good song may be output in which the key, pitch, timing, and the like are the same as the original song except for the performance part.
  • a good song may be output that includes a performance part reflecting singing technique. That is, as an example, the superior state may be a state in which at least some of the musical elements are superior compared to the original song.
  • the output sound data is sound data corresponding to the song outputted from the audio output section 16, and is used to finally output the song from the audio output section 16.
  • the output sound data is selected from among multiple types of sound data included in the terminal audio data 12B stored in the terminal storage unit 12.
  • multiple types of sound data may be stored in the server storage unit 32 of the server 30. Further, the output sound data may be generated each time in the game terminal 10 or the server 30 as necessary.
  • the multiple types of sound data include first sound data that corresponds to the inferior state and has many inferior parts, second sound data that corresponds to the normal state and has few inferior parts, and third sound data that corresponds to the excellent state, has no inferior parts, and includes a performance part. These first to third sound data are generated as candidates for the output sound data. The first to third sound data are based on the same song; therefore, at least a portion (for example, some bars) of the first to third sound data has the same waveform.
  • the inferior parts of the first sound data and the second sound data have different pitches compared to the same lyric part of the third sound data.
  • the sound output using the inferior part of the lyric "a” is a degraded sound that gives the user a sense of discomfort compared to the part of the lyric "a" in the reference sound data.
  • degraded sounds include sounds whose pitch is too high or too low, sounds whose timing is early or late, sounds with incorrect or skipped lyrics, sounds whose voice is too quiet or too loud, sounds with a cracking voice, hoarse sounds, and the like.
  • the performance part of the third sound data includes a performance that reflects singing technique.
  • for example, the performance part includes a performance that reflects a singing technique such as vibrato.
  • the performance part may also include techniques such as "staccato", "shakuri" (scooping up to a note), "fall", or "kobushi" (an ornamental vocal turn).
  • the output sound data may be generated using reference sound data corresponding to a song sung according to the musical score, inferior sound data including an inferior part that is inferior compared to the reference sound data, and superior sound data including a performance part that is superior compared to the reference sound data.
  • the output sound data is generated by mixing at least two types of reference sound data, inferior sound data, and superior sound data.
  • the first sound data, the second sound data, or the third sound data is used as the output sound data.
  • the first to third sound data are generated based on the generation data and the musical score data.
  • alternatively, the first to third sound data are generated based on the inferior sound data, the reference sound data, and the superior sound data.
  • the inferior sound data, the reference sound data, and the superior sound data are all generated based on the generation data and the musical score data.
  • the inferior sound data, the reference sound data, and the superior sound data may be used as the first to third sound data, respectively.
  • the plurality of types of sound data may thus be the inferior sound data, the reference sound data, and the superior sound data.
  • in this case, the first sound data corresponding to the inferior state is the inferior sound data, the second sound data corresponding to the normal state is the reference sound data, and the third sound data corresponding to the superior state is the superior sound data.
  • the plurality of types of sound data may be divided into a plurality of stages, such that as the parameter becomes higher, the proportion of inferior parts is lowered.
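As a hedged sketch of the mixing and staging ideas above, the inferior, reference, and superior sound data could be crossfaded with weights derived from the parameter, so that a higher parameter lowers the share of inferior parts. The weighting curve and the 0 to 400 parameter scale are assumptions, not from the publication.

```python
def mix_weights(singing, lo=0, hi=400):
    """Return (inferior, reference, superior) weights that sum to 1."""
    t = max(0.0, min(1.0, (singing - lo) / (hi - lo)))
    inferior = (1.0 - t) ** 2
    superior = t ** 2
    reference = 1.0 - inferior - superior
    return inferior, reference, superior

def mix_samples(inferior_s, reference_s, superior_s, singing):
    """Crossfade three aligned waveforms sample by sample."""
    wi, wr, ws = mix_weights(singing)
    return [wi * a + wr * b + ws * c
            for a, b, c in zip(inferior_s, reference_s, superior_s)]

# At parameter 0 only the inferior data is heard; at 400 only the superior.
w_low = mix_weights(0)
```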
  • each of the plurality of types of sound data may be generated for each character that is a game object.
  • the generation unit 11D generates output sound data.
  • the generation unit 11D generates output sound data using generation data that includes singing characteristics of a human performer (for example, an idol or a voice actor) who plays the character of the game object.
  • This generation data is created from original data of a plurality of (for example, three) songs sung by a performer.
  • output sound data of a desired song is generated from the generation data and the musical score data of that song by voice creation software, an AI (Artificial Intelligence) built through machine learning. Therefore, it is also possible to generate output sound data for a song other than the recorded songs.
  • the generation data is used to reproduce the characteristics of the performer's singing (for example, the timing of breaths or the degree of pitch deviation). Therefore, when output sound data is generated using different generation data, output sound data with different characteristics is generated even from the same musical score data.
  • the generation unit 11D generates output sound data according to the state indicated by the state information (for example, parameters) of the breeding object acquired by the state acquisition unit 11A.
  • the terminal storage unit 12 stores generation data and voice creation software.
  • the audio creation software is downloaded from the server 30 in advance.
  • the generation data is generated in the server 30 and stored in the server storage unit 32, and is transmitted from the server 30 in response to a download request from the game terminal 10.
  • the server 30 may transmit the generation data in advance in response to a request from the game terminal 10.
  • the terminal storage unit 12 may also store voice creation software that has learned the characteristics of the performer's singing indicated by the generation data.
  • the generation unit 11D generates output sound data in real time before and during the live performance.
  • the generation unit 11D may generate the output sound data at the timing when the status acquisition unit 11A acquires the status information.
  • when generating output sound data corresponding to a normal state, the generation unit 11D generates output sound data with few or no inferior parts.
  • when generating output sound data corresponding to an excellent state, the generation unit 11D generates output sound data that has no inferior parts and includes a performance part.
  • when generating output sound data corresponding to an inferior state, the generation unit 11D generates output sound data with many inferior parts.
  • the proportion of the inferior portion may be increased or decreased continuously or stepwise depending on the parameter. Further, the proportion of the inferior part may be determined according to a table in which the proportion of the inferior part is defined according to the parameter.
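As an illustration of the bullet above, the mapping from a parameter to the proportion of inferior parts could be realized continuously, stepwise, or via a lookup table. The threshold and ratio values below are illustrative assumptions, not values from the embodiment.

```python
# Three interchangeable ways to derive the inferior-part proportion from a
# parameter, per the text: continuous, stepwise, or table-driven.
# All numeric values here are illustrative assumptions.

def inferior_ratio_continuous(param, lo=0, hi=1000):
    """Linearly decrease the inferior-part ratio as the parameter rises."""
    param = max(lo, min(hi, param))
    return 1.0 - (param - lo) / (hi - lo)

def inferior_ratio_stepwise(param):
    """Decrease the ratio in discrete stages."""
    if param < 400:
        return 0.6
    if param <= 600:
        return 0.3
    return 0.0

# Table form: upper parameter thresholds paired with predefined ratios.
RATIO_TABLE = [(400, 0.6), (600, 0.3), (1000, 0.0)]

def inferior_ratio_table(param):
    for threshold, ratio in RATIO_TABLE:
        if param <= threshold:
            return ratio
    return 0.0
```

The continuous form supports fine-grained changes during a live performance, while the stepwise and table forms match pre-generated candidate patterns.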
  • the amount of data downloaded from the server 30 can be reduced. Furthermore, since the proportion of inferior parts can be finely varied according to the parameters, the expressiveness of changes in proficiency can be improved. Furthermore, even if the parameters of the training object change due to user operations (for example, use of an item or activation of a skill) during a live performance, output sound data can be generated with the proportion of inferior parts adjusted to the changed parameters. In addition, output sound data with the necessary proportion of inferior parts can be generated without generating many types of output sound data in advance.
  • Output sound data may be generated according to this parameter.
  • the generation unit 11D generates the output sound data so that inferior parts are few at the beginning of the live performance, when stamina is high, and numerous at the end of the live performance, when stamina is low.
  • the generation unit 11D may generate the output sound data so that the sense of elation increases during the exciting period of a live performance, such as the chorus of a song, and the inferior portions are reduced during this period.
  • the generation unit 11D may generate the output sound data so that inferior parts are more numerous in parts corresponding to a period of high tension at the beginning of a live performance, such as the beginning of a song. Alternatively, the generation unit 11D may determine the amount of the inferior portion according to a combination of two or more of these dynamic parameters, and generate the output sound data.
  • the dynamic parameter may be other parameters such as singing ability. Alternatively, the parameters may change during the live performance due to the use of an item or the activation of a skill. Further, the dynamic parameter of the trained character may be a type of parameter that changes due to training, or may be a value that does not change due to training but changes during a live performance.
  • the generation unit 11D may generate output sound data based on inferior sound data or the like. Specifically, the generation unit 11D generates output sound data (for example, the first to third sound data) by mixing inferior sound data, an example of first sound data that includes an inferior part in at least a portion, with reference sound data, an example of at least one other sound data that has a smaller proportion of inferior parts than the first sound data or includes none.
  • the inferior part has a different sound output timing or pitch compared to other sound data. This allows output sound data including inferior parts to be generated through mixing, thereby reducing the load on the generation process.
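The mixing step described above might be sketched as follows, with each sound data represented as a list of per-segment clips (e.g. one clip per lyric syllable). The segment granularity and the random selection policy are assumptions for illustration.

```python
import random

# Assemble output sound data segment by segment from inferior sound data and
# reference sound data, per the mixing described in the text.

def mix_output(inferior_segments, reference_segments, inferior_ratio, rng=None):
    """Pick each segment from the inferior data with probability
    `inferior_ratio`, otherwise from the reference data."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    assert len(inferior_segments) == len(reference_segments)
    return [
        inf if rng.random() < inferior_ratio else ref
        for inf, ref in zip(inferior_segments, reference_segments)
    ]

# Hypothetical segments; B1/B2/B3 label the kinds of inferior parts.
inferior = ["a(B1 sharp)", "i(B2 flat)", "u(B3 late)"]
reference = ["a", "i", "u"]
skilled = mix_output(inferior, reference, inferior_ratio=0.0)   # all reference
unskilled = mix_output(inferior, reference, inferior_ratio=1.0) # all inferior
```

Because only segment selection happens at mix time, this keeps the real-time generation load low, as the text notes.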
  • the generation unit 11D can further use superior sound data for mixing as other sound data.
  • the inferior tone data, the reference tone data, and the superior tone data are generated in the server 30 using generation data, and are stored in the server storage unit 32. These sound data are then transmitted from the server 30 in response to a download request from the game terminal 10.
  • the server 30 may transmit the inferior tone data, the reference tone data, and the superior tone data in advance in response to a request from the game terminal 10.
  • the usage ratio of each of the inferior tone data, the reference tone data, and the superior tone data may be increased or decreased depending on the parameters. Furthermore, the usage ratio of these data may be determined in advance for each parameter.
  • FIG. 4A is a schematic diagram showing the proportion of inferior parts in output sound data obtained mainly at the beginning of the training part, before the singing ability has increased.
  • FIG. 4B is a schematic diagram showing the proportion of inferior parts in the output sound data obtained mainly in the middle of the training part when the singing ability has increased to some extent.
  • FIG. 4C is a schematic diagram showing the proportion of inferior parts in the output sound data obtained mainly at the end of the training part in a state where the singing ability has increased.
  • FIG. 5A is a schematic diagram showing superior sound data corresponding to a song including a performance part.
  • FIG. 5B is a schematic diagram showing reference tone data corresponding to a song according to the musical score.
  • FIG. 5C is a schematic diagram showing inferior tone data corresponding to a song including an inferior part.
  • the generation unit 11D generates the output sound data of FIG. 4A (for example, the first sound data) including the inferior part using the reference sound data and the inferior sound data.
  • reference sound data is used for the lyrics "Ue” and "Kikukeko".
  • Inferior tone data is used for other parts.
  • the inferior sound data includes, as inferior parts, a part B1 where the pitch is shifted higher, a part B2 where the pitch is shifted lower, and a part B3 where the pitch is shifted higher and the timing is delayed.
  • the generation unit 11D uses the inferior sound data for the part of the lyrics "Ai". As a result, output sound data is generated that includes, as an inferior part, a part B1 with a higher pitch in the part of the lyrics "Ai". Furthermore, the generation unit 11D uses inferior sound data for the part of the lyrics "Oka". As a result, output sound data is generated that includes, as an inferior part, a part B2 with a lower pitch in the part of the lyrics "Oka". Furthermore, the generation unit 11D uses inferior sound data for the lyrics "Sasisuseso". As a result, output sound data is generated that includes, as an inferior part, a part B3 in which the pitch is shifted higher and the timing is delayed, in the part of the lyrics "Sasisuseso".
  • the generation unit 11D generates the output sound data of FIG. 4B (for example, second sound data), which has few inferior parts, using superior sound data in addition to the reference sound data and inferior sound data.
  • the superior tone data is used for the lyrics “eo” and “suseso”.
  • the inferior tone data is used for the "ku” and "sa” parts of the lyrics, and the reference tone data is used for the other parts.
  • the superior sound data includes a portion P to which vibrato is applied as a presentation portion.
  • the generation unit 11D uses the superior sound data for the "so" part of the lyrics. As a result, output sound data is generated that includes a part P in which vibrato is applied, as a performance part, to the "so" part of the lyrics. Furthermore, the generation unit 11D uses inferior sound data for the "ku" part of the lyrics. As a result, output sound data is generated that includes, as an inferior part, a part B2 with a lower pitch in the "ku" part of the lyrics. Furthermore, the generation unit 11D uses inferior sound data for the "sa" part of the lyrics. As a result, output sound data is generated that includes, as an inferior part, a part B3 in which the pitch is shifted higher and the timing is delayed in the "sa" part of the lyrics.
  • the generation unit 11D generates the output sound data (for example, third sound data) of FIG. 4C without the inferior part using the reference sound data and the superior sound data.
  • the standard tone data is used for the "ki" part of the lyrics
  • the superior tone data is used for the other parts.
  • the generation unit 11D uses the superior sound data for the "so" part of the lyrics in the output sound data of FIG. 4C.
  • output sound data is generated that includes a portion P where vibrato is applied as a production portion to the “so” portion of the lyrics.
  • the output sound data in FIG. 4A includes inferior parts in the lyrics “ai”, “oka”, and “sashisu seso”.
  • the output sound data of FIG. 4B includes inferior parts in the lyrics “ku” and “sa”, and has relatively few inferior parts compared to the output sound data of FIG. 4A.
  • the output sound data in FIG. 4C does not include any inferior parts, and has relatively fewer inferior parts compared to the output sound data in FIGS. 4A and 4B.
  • the generation unit 11D transmits a download request for the inferior sound data, reference sound data, and superior sound data to the server 30 before the start of the live performance. Then, the generation unit 11D mixes the inferior sound data and the like downloaded from the server 30 in real time before and during the live performance, and generates output sound data according to the state indicated by the state information acquired by the state acquisition unit 11A.
  • the inferior sound data, reference sound data, and superior sound data may instead be downloaded from the server 30 to the game terminal 10 in advance. The generation unit 11D then mixes the pre-downloaded inferior sound data and the like in real time before and during the live performance, and generates output sound data according to the state indicated by the state information acquired by the state acquisition unit 11A.
  • the inferior tone data, superior tone data, and reference tone data are prepared in advance for each song. Further, when a unit including a plurality of game objects each corresponding to a character sings the same song, inferior tone data, superior tone data, and reference tone data may be prepared for each character.
  • the generation unit 11D may generate inferior tone data, reference tone data, and superior tone data using the generation data.
  • the generation data is generated in the server 30 and stored in the server storage unit 32, and is transmitted from the server 30 in response to a download request from the game terminal 10.
  • the generation unit 11D mixes pre-generated inferior sound data and the like to generate output sound data in accordance with the state indicated by the state information acquired by the state acquisition unit 11A in real time during a live performance.
  • when the generation unit 11D generates output sound data in real time during a live performance, it may mix the data in bar units or note units so that the output sound data includes parts of the inferior sound data, the reference sound data, and the superior sound data.
  • when the singing ability parameter is low, the generation unit 11D increases the usage ratio of the inferior sound data, mainly using the inferior sound data and switching some of its measures to parts of the reference sound data.
  • when the singing ability parameter is high, the generation unit 11D reduces the usage ratio of the inferior sound data, mainly using the reference sound data and switching some measures to parts of the inferior sound data.
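The bar-unit switching described above could be sketched as follows. The cutoff value and the "swap every Nth bar" policy are illustrative assumptions; the embodiment only states that one data source dominates and some measures are switched to the other.

```python
# Assemble the song bar by bar: one source (inferior or reference sound data)
# dominates depending on the singing-ability parameter, and periodic bars are
# swapped to the other source. Cutoff and swap interval are assumptions.

def assemble_bars(inferior_bars, reference_bars, singing_ability,
                  cutoff=500, swap_every=4):
    """Return a bar list drawn mainly from one source, with every
    `swap_every`-th bar taken from the other source."""
    if singing_ability < cutoff:
        main, other = inferior_bars, reference_bars
    else:
        main, other = reference_bars, inferior_bars
    return [
        other[i] if i % swap_every == swap_every - 1 else main[i]
        for i in range(len(main))
    ]

inferior_bars = ["I0", "I1", "I2", "I3", "I4", "I5", "I6", "I7"]
reference_bars = ["R0", "R1", "R2", "R3", "R4", "R5", "R6", "R7"]
low = assemble_bars(inferior_bars, reference_bars, singing_ability=300)
high = assemble_bars(inferior_bars, reference_bars, singing_ability=800)
```

Working at bar granularity means only one small decision per measure, which suits real-time generation during a live performance.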
  • the generation unit 11D may generate the output sound data based on the generation data and the musical score data before the start of the live performance (for example, at the timing when the status acquisition unit 11A acquires the status information), rather than in real time during the live performance.
  • the generation unit 11D includes the generated output sound data in the terminal audio data 12B and stores it in the terminal storage unit 12.
  • the generation unit 11D may include the generated output sound data in the server audio data 32A and store it in the server storage unit 32.
  • the generation unit 11D may generate multiple types of sound data in advance from the generation data as output sound data candidates. For example, the generation unit 11D generates multiple types of sound data in advance for each state indicated by the state information (for example, parameters) of the training object. As an example, when the singing ability is low, the generation unit 11D generates output sound data candidates so as to include an inferior part in which the pitch is significantly shifted. Furthermore, the generation unit 11D generates candidates for output sound data such that as the singing ability increases, the amount of deviation in pitch decreases. Then, the generation unit 11D includes the plurality of types of generated sound data in the terminal audio data 12B, and stores it in the terminal storage unit 12.
  • the generation unit 11D may also generate, as output sound data candidates, multiple types of sound data for each training object so that the degree of deterioration of the inferior parts varies in stages according to the state information of the training object. Then, the generation unit 11D includes the plurality of types of generated sound data in the terminal audio data 12B and stores it in the terminal storage unit 12. For example, the generation unit 11D generates in advance output sound data candidates for a normal pattern in which the singing ability is within a predetermined range, an excellent-state pattern in which the singing ability exceeds the predetermined range, and an inferior-state pattern in which the singing ability is below the predetermined range.
  • the generation unit 11D may generate output sound data or inferior tone data, etc. according to musical score data in which the position of the inferior part in the song is set. For example, a song made entirely of degraded sounds will be difficult for the user to hear. Therefore, in the musical score data, the inferior part is set at a position corresponding to a predetermined part of the song so that the entire sound does not become deteriorated. For example, inferior parts are set at positions corresponding to the beginning of a song, the end of a song, a high-pitched part, a low-pitched part, a part with complicated lyrics, and the like. The generation unit 11D generates output sound data, inferior sound data, etc. so that inferior parts are provided at these positions.
  • parts that may be used as inferior parts, parts that may be used as performance parts, parts that should not be used as inferior parts, and parts that should not be used as performance parts may be predetermined in accordance with the musical piece. In other words, the music may have predetermined locations that can be made into superior or inferior parts and locations that cannot.
  • the content of the inferior part may be set in the musical score data.
  • the contents of the inferior part include, for example, the degree of inferiority and the mode of inferiority.
  • the content of the inferior part is the extent to which the voice becomes soft or loud, or the manner in which the voice becomes hoarse. Then, the generation unit 11D generates output sound data such that the sound is degraded according to the contents of the inferior portion set in the musical score data.
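Musical score data with pre-set positions and contents of inferior parts, as described in the bullets above, might look like the following sketch. The field names, bar ranges, mode labels, and degree values are assumptions for illustration.

```python
# Hypothetical score-data structure: each inferior part records where it
# occurs (a bar range), the mode of inferiority, and its degree.

SCORE_DATA = {
    "song_id": "song_001",
    "inferior_parts": [
        {"bars": (0, 2),   "mode": "hoarse",     "degree": 0.5},  # song start
        {"bars": (30, 32), "mode": "pitch_flat", "degree": 0.3},  # high part
    ],
}

def modes_for_bar(score, bar):
    """List the (mode, degree) degradations set for a given bar; an empty
    list means the bar must be rendered according to the score as-is."""
    return [
        (p["mode"], p["degree"])
        for p in score["inferior_parts"]
        if p["bars"][0] <= bar < p["bars"][1]
    ]
```

The generation unit would consult such entries so that only the designated positions are degraded, keeping the song as a whole listenable.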
  • the position of the inferior part and the contents of the inferior part may be set for each character, which is a game object.
  • a character with a low voice is set so that the pitch of the high-pitched part is lower than that of the original song.
  • characters who are easily nervous are set so that their voices become hoarse when they start singing.
  • characters with low physical strength are set so that their voices become hoarse at the end of the song.
  • the contents of the inferior part may be set according to the parameters of the training object before the live performance starts, or the parameters of the training object that change during the live performance, or the like. For example, musical score data for a person with low physical strength is set so that the voice becomes hoarse at the end of the song.
  • the generation unit 11D may divide the characters, which are game objects, into a plurality of types, and use musical score data in which patterns of inferior parts differ for each type. For example, the generation unit 11D uses musical score data of a pattern for shifting the pitch higher, a pattern for shifting the pitch lower, a pattern for accelerating the timing, and a pattern for delaying the timing. Specifically, a pattern is determined for each character, and the generation unit 11D uses musical score data according to the pattern. For example, the generation unit 11D generates output sound data for character A according to musical score data in which an inferior part of a pattern that shifts the pitch to a higher side is set. Further, the generation unit 11D generates output sound data for character B according to the musical score data in which the inferior part of the pattern for accelerating the timing is set. This allows the number of musical score data to be created to be reduced.
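The per-character pattern assignment above can be sketched as a small lookup plus a degradation step. Character names, pattern labels, and the shift amounts are assumptions for illustration.

```python
# Each character type is assigned one pattern of inferior parts, so only a
# few score-data variants (one per pattern) need to be prepared.

CHARACTER_PATTERN = {
    "character_A": "pitch_high",    # inferior parts shift pitch higher
    "character_B": "timing_early",  # inferior parts rush the timing
    "character_C": "pitch_low",
    "character_D": "timing_late",
}

def apply_pattern(note, pattern):
    """Degrade one note according to the pattern label (semitone / half-beat
    shifts are assumed values)."""
    note = dict(note)
    if pattern == "pitch_high":
        note["pitch"] += 1
    elif pattern == "pitch_low":
        note["pitch"] -= 1
    elif pattern == "timing_early":
        note["onset"] -= 0.5
    elif pattern == "timing_late":
        note["onset"] += 0.5
    return note

note = {"pitch": 60, "onset": 4.0}
degraded_a = apply_pattern(note, CHARACTER_PATTERN["character_A"])
```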
  • the score data is created by the administrator of the server 30 or the user of the game terminal 10.
  • the musical score data may be automatically generated by the game terminal 10 or the server 30.
  • the use of musical score data is merely an example, and the portion to be degraded does not need to be set in advance.
  • the data acquisition unit 11B may randomly select a portion to be degraded, degrade the selected portion, and generate output sound data.
  • the generation unit 11D may generate the output sound data by referring to a table showing the proportion of inferior parts according to the state information (for example, parameters) of the breeding object.
  • the generation unit 11D refers to a table that associates low singing ability below a predetermined value with the ratio of inferior parts or the usage ratio of inferior sound data. Then, when the singing ability is below a predetermined value, the generation unit 11D generates output sound data such that the proportion of the inferior part or the usage proportion of the inferior sound data associated with the low singing ability is reflected.
  • the generation unit 11D refers to a table that associates singing ability higher than a predetermined value with the proportion of inferior parts or the proportion of use of inferior sound data. Then, when the singing ability is higher than a predetermined value, the generation unit 11D generates output sound data so as to reflect the proportion of the inferior part or the usage proportion of the inferior sound data associated with the high singing ability.
  • the generation unit 11D may generate the output sound data in a manner different from the manner described above, as long as it can generate multiple types of sound data corresponding to songs of different skill levels as output sound data candidates. For example, the generation unit 11D may create data for each phoneme from the pronunciation of a sentence by a human performer, and generate output sound data, inferior sound data, etc. based on the data.
  • the output sound data described above includes an inferior part or a performance part only in the song part (that is, the sung part).
  • the played parts (that is, the instrumental parts) in the first to third sound data correspond to a performance according to the musical score, and do not include inferior parts.
  • the state acquisition unit 11A acquires state information indicating the state of the game object.
  • the game object is, for example, a breeding object, and the states of the breeding object may include a poor state, a superior state, and other normal states.
  • the state of the nurturing object may be divided into a plurality of stages, for example, four or more stages such as high, slightly high, slightly low, and low.
  • the data acquisition unit 11B acquires output sound data according to the state based on the status information acquired by the status acquisition unit 11A.
  • when a parameter (for example, singing ability or physical strength) is lower than a predetermined value, or when an abnormal condition such as illness or injury occurs, the data acquisition unit 11B acquires output sound data corresponding to the inferior state.
  • when the parameter is higher than a predetermined value, or when no abnormal condition such as illness or injury has occurred, the data acquisition unit 11B acquires output sound data corresponding to an excellent state.
  • there may be one predetermined value, or two or more. Two or more means, for example, that the predetermined value used to determine whether the state is inferior differs from the predetermined value used to determine whether the state is excellent. In this way, multiple predetermined values may be used to determine multiple states.
  • the data acquisition unit 11B may acquire output sound data corresponding to an inferior state when the value of a parameter such as mental strength is low. Furthermore, when a parameter such as stamina or physical strength is low, the data acquisition unit 11B may acquire output sound data corresponding to an inferior state.
  • the state indicated by the state information may be any state that changes depending on parameters, actions, or skills.
  • the status includes various situations such as low level, high level, fatigue, poor condition, good condition, buff, and debuff.
  • the state information is information for specifying the state.
  • the status information is a numerical value of a parameter, information indicating whether a flag such as illness or injury is on or off, or status identification information that uniquely identifies the status. For example, by determining the state based on parameters as state information, an increase or decrease in the parameters resulting from training can be reflected in the singing voice output based on the output sound data.
  • the state acquisition section 11A may determine the state indicated by the state information, and the data acquisition section 11B may obtain output sound data corresponding to the determined state.
  • the state acquisition unit 11A determines the state based on parameters associated with the breeding object.
  • the status acquisition unit 11A refers to the object data 12A and acquires the parameters of the singing ability of the training object before the start of the live performance.
  • the data acquisition unit 11B then acquires output sound data according to the state indicated by the singing ability. When the acquired singing ability value is within the range of 400 to 600 inclusive, the data acquisition unit 11B acquires output sound data corresponding to a normal state. When the acquired singing ability value is less than 400, the data acquisition unit 11B acquires output sound data corresponding to an inferior state. When the acquired singing ability value is higher than 600, the data acquisition unit 11B acquires output sound data corresponding to an excellent state.
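The threshold logic above (the 400 and 600 boundaries are stated in the text) maps directly to a small classifier:

```python
# Classify the training object's state from the singing-ability value,
# using the thresholds given in the example: <400 inferior,
# 400..600 inclusive normal, >600 excellent.

def state_from_singing_ability(value):
    if value < 400:
        return "inferior"
    if value <= 600:
        return "normal"
    return "excellent"
```

The data acquisition unit would then fetch the output sound data pattern pre-generated for the returned state.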
  • the status acquisition unit 11A may acquire status information that changes while performing a live part.
  • the data acquisition unit 11B acquires output sound data corresponding to the state of the breeding object indicated by the parameters that change during the live performance.
  • the status acquisition unit 11A may acquire dynamic parameters related to the live performance, such as the number of fans that increases or decreases during the live performance, the level of excitement of the entire live performance, the number of cheers, the number of viewers when the live performance is virtually streamed within the game space, or the amount of coins. Then, the data acquisition unit 11B acquires output sound data corresponding to the excitement state of the live performance based on the acquired values.
  • the dynamic parameters include, for example, endurance, elation, tension, and the like.
  • the state acquisition unit 11A may acquire the increased or decreased parameters.
  • the state acquisition unit 11A may refer to the object data 12A in real time during the live performance to acquire the parameters of the endurance of the training object.
  • when the endurance is low, the data acquisition unit 11B acquires output sound data corresponding to an inferior state.
  • when the endurance is within a predetermined range, the data acquisition unit 11B acquires output sound data corresponding to the normal state.
  • when the endurance is high, the data acquisition unit 11B acquires output sound data corresponding to an excellent state.
  • the parameters change as the game progresses. For example, a parameter increases as the game progresses, or decreases as the state of the game object deteriorates. Specifically, as the game progresses and the turns of the training part pass, the numerical value of a parameter of the training object (for example, singing ability) increases. Furthermore, when the training object becomes fatigued and its condition deteriorates, the numerical value of a parameter of the training object (for example, physical strength) becomes low. Alternatively, a parameter may decrease as the game progresses.
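The parameter dynamics described above might be sketched per training turn as follows. The increment and decrement values are illustrative assumptions.

```python
# Advance one training turn: singing ability rises as turns pass, while
# physical strength falls when the object becomes fatigued.

def advance_turn(params, fatigued=False):
    """Return an updated copy of the parameter dict after one turn."""
    params = dict(params)
    params["singing_ability"] += 10           # grows as the game progresses
    if fatigued:
        params["physical_strength"] -= 20     # drops when condition worsens
    return params

p = {"singing_ability": 380, "physical_strength": 100}
p = advance_turn(p, fatigued=True)
```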
  • the state acquisition unit 11A may acquire state information of each game object of a unit consisting of a plurality of game objects including the breeding object.
  • the data acquisition unit 11B acquires output sound data corresponding to the state of each game object.
  • the data acquisition unit 11B acquires output sound data corresponding to the singing proficiency state of each game object based on the singing ability value of each game object.
  • the data acquisition unit 11B may acquire output sound data corresponding to the state of fatigue of each game object based on the physical strength value of each game object.
  • the status acquisition unit 11A may acquire only the status information of the training object even when the unit is performing live. In this case, it is possible to acquire output sound data according to the parameters, and to output audio, only for the training object.
  • the data acquisition unit 11B acquires output sound data that corresponds to the song sung by the training object and corresponds to the state indicated by the status information acquired by the status acquisition unit 11A. For example, the data acquisition unit 11B acquires output sound data according to the state by acquiring the output sound data generated by the generation unit 11D according to the state.
  • the data acquisition unit 11B may acquire output sound data generated by the generation unit 11D in advance before the start of the live performance, or may acquire output sound data generated by the generation unit 11D in real time during the live performance.
  • the data acquisition unit 11B may cause the generation unit 11D to generate output sound data according to the state of the breeding object. Further, the data acquisition unit 11B may select and acquire output sound data according to the state from the terminal storage unit 12 or the server storage unit 32.
  • the output sound data corresponding to the inferior state includes an inferior part in at least a portion, according to the state indicated by the state information. Specifically, when the state information indicates an inferior state, the data acquisition unit 11B acquires output sound data with a higher proportion of inferior parts than when the state information indicates an excellent state. Conversely, when the state information indicates an excellent state, the data acquisition unit 11B acquires output sound data that has a small proportion of inferior parts or does not include inferior parts.
  • the data acquisition unit 11B may select and acquire output sound data according to the state information from among a plurality of different types of sound data. For example, the data acquisition unit 11B selects and acquires output sound data according to the parameters from among a plurality of types (for example, three patterns) of sound data generated in advance. Specifically, when the singing ability is below a predetermined range and the status information indicates an inferior state, the data acquisition unit 11B selects and acquires output sound data with a high proportion of inferior parts (for example, the first sound data). When the singing ability is within the predetermined range and the status information indicates a normal state, the data acquisition unit 11B selects and acquires output sound data with a small proportion of inferior parts or no inferior parts (for example, the second sound data).
  • when the singing ability exceeds the predetermined range and the status information indicates an excellent state, the data acquisition unit 11B selects and acquires output sound data that has a small proportion of inferior parts or includes no inferior parts (for example, third sound data). Thereby, the process of generating output sound data each time can be omitted, and the processing load can be reduced.
  • when the output sound data including an inferior part in at least a portion is the first sound data, the data acquisition unit 11B acquires other sound data that, compared with the first sound data, has a smaller proportion of inferior parts or includes no inferior parts. As a result, voices containing inferior parts are less likely to be output, or are not output at all, and it is possible to express, for example, a skilled state with high singing ability or a lively state with high physical strength during the game.
  • the data acquisition unit 11B may acquire output sound data that includes at least a portion of the inferior part.
  • the data acquisition unit 11B may acquire output sound data that has fewer inferior parts or includes no inferior parts.
  • the data acquisition unit 11B may acquire output sound data that includes more performance parts.
  • the data acquisition unit 11B may acquire the output sound data of each game object of the unit including the breeding object.
  • a unit of two to seven people including the training object may sing.
  • which game object sings which part of the song may be changed by automatic selection such as placing the breeding object at the center of the unit or by user selection.
  • output sound data of the entire song is generated for each game object according to its state.
  • the data acquisition unit 11B acquires output sound data according to the states of all members of the unit.
  • the game progression unit 11C causes the audio output unit 16 to output audio based on the output sound data of all members of the unit while not outputting unnecessary portions of audio. Specifically, the game progression unit 11C causes the audio output unit 16 to output only the audio of the song part assigned to each game object, and does not output the audio of the other, unassigned parts. This eliminates the need to prepare unit output sound data every time a part assignment changes. Therefore, the number of pieces of output sound data can be reduced and data management becomes easier.
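The part-assignment approach above can be sketched as follows. This is an illustrative sketch only: the `parts_to_play` helper and the part and member names are hypothetical and not taken from the publication.

```python
# Each member's output sound data covers the whole song; only the
# parts assigned to that member are actually output (hypothetical names).

def parts_to_play(assignments, member):
    """Return the song parts to output for one unit member.

    assignments: dict mapping part name -> member identifier
    member: the member whose audio is being output
    """
    return [part for part, who in assignments.items() if who == member]

assignments = {"verse1": "obj_a", "chorus": "obj_b", "verse2": "obj_a"}
print(parts_to_play(assignments, "obj_a"))  # ['verse1', 'verse2']
```

With this arrangement the same per-member sound data can be reused whenever part assignments change; only the assignment table is updated.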
  • the data acquisition unit 11B may acquire output sound data of a game object other than the training object, depending on the state of the training object. For example, when the breeding object is in an inferior state, the data acquisition unit 11B also acquires output sound data in an inferior state from the output sound data of other game objects.
  • the data acquisition unit 11B may acquire output sound data of other game objects according to a state indicated by a predetermined parameter or according to a fixed state (for example, an excellent state). Furthermore, when a unit sings, the data acquisition unit 11B may acquire predetermined output sound data (for example, third sound data) as the output sound data of the other game objects, regardless of their parameters.
  • the game object included in the unit may be an object that has previously been trained (for example, an inherited object).
  • the data acquisition unit 11B may acquire output sound data according to the state indicated by the parameters of the inherited object as the output sound data of the inherited object. For example, when the singing ability of the inherited object indicates an excellent state exceeding a predetermined range, the data acquisition unit 11B acquires output sound data corresponding to the excellent state as the output sound data of the inherited object.
  • the material object may be included in the unit.
  • the data acquisition unit 11B may acquire output sound data according to the state indicated by the parameters of the material object as the output sound data of the material object. For example, when the material object indicates an inferior state in which the singing ability falls below a predetermined range, the data acquisition unit 11B acquires output sound data corresponding to the inferior state as the output sound data of the material object. Furthermore, if the number of inherited objects is less than the number of members in the unit, the unit may be formed in that state.
  • the data acquisition unit 11B may obtain the output sound data of the entire unit that reflects the assignment of parts in the unit by causing the generation unit 11D to generate it. Thereby, the data acquisition unit 11B only needs to acquire the output sound data of the unit, and does not need to acquire the output sound data of each character. Alternatively, if the data capacity is not a problem, the generation unit 11D may generate separate song parts for each game object as the output sound data of the unit.
  • the game progress section 11C which is an example of a game progress means, simulates the growth of game objects. Then, the game progression unit 11C changes the parameters of the breeding object according to the progress of the game. For example, when a lesson for increasing the singing ability is given in the training part, the game progression unit 11C increases the singing ability of the training object and reduces the physical strength of the training object. Then, the game progression unit 11C associates the increased or decreased parameters with object identification information that uniquely identifies the breeding object, includes them in the object data 12A, and stores them in the terminal storage unit 12. Alternatively, the game progression unit 11C may cause the server storage unit 32 to store the data of the breeding object.
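The parameter change described above (a singing lesson raising singing ability while reducing physical strength) might be sketched as below. The function name, parameter keys, and increment values are illustrative assumptions, not values from the publication.

```python
# Hypothetical sketch of the parameter update applied by a singing
# lesson in the training part: singing ability rises, physical
# strength falls, and the result is stored keyed by object ID.

def apply_singing_lesson(params, gain=10, fatigue=5):
    """Return updated parameters after one singing lesson."""
    updated = dict(params)
    updated["singing"] = params["singing"] + gain
    updated["stamina"] = max(0, params["stamina"] - fatigue)  # never below zero
    return updated

object_data = {}  # stand-in for object data 12A in the terminal storage unit
obj_id = "breed_001"
object_data[obj_id] = apply_singing_lesson({"singing": 40, "stamina": 80})
print(object_data[obj_id])  # {'singing': 50, 'stamina': 75}
```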
  • the game progression unit 11C increases the dance power of the training object when a lesson for increasing the dance power is given in the training part.
  • the game progression unit 11C may acquire a video corresponding to the dancing ability from the server storage unit 32 or the terminal storage unit 12 and display it on the terminal display unit 15.
  • the server storage unit 32 or the terminal storage unit 12 stores, as dance videos according to dancing ability, an inferior dance video representing a state where the dancing ability is low and the dancing is poor, and a superior dance video representing a state where the dancing ability is high and the dancing is good.
  • the game progression unit 11C displays this dance video during a live performance.
  • the server storage unit 32 or the terminal storage unit 12 stores motion data of the training object that is used according to the training state of the training object. Then, the game progression unit 11C displays an effect based on the motion data at a predetermined timing. Thereby, the user can visually sense the growth of the training object.
  • the motion data is not limited to dance videos, and may be motion data that defines the motion of the training object.
  • the game progress section 11C may cause the terminal display section 15 to display an image of the training object showing a distressed expression or a lack of confidence.
  • the game progress unit 11C may cause the terminal display unit 15 to display an image of the training object showing a smiling face or a confident expression.
  • when the state information indicates an inferior state, that is, when the parameters are relatively low, expressions of distress or lack of confidence are displayed relatively often; as the parameters increase and the state information indicates a normal state or an excellent state, smiling or confident expressions are displayed relatively often. Thereby, the user can visually sense the growth of the training object.
  • the game progression unit 11C which is an example of an audio output control unit, causes the audio output unit 16, which is an example of an audio output unit, to output audio based on the output sound data acquired by the data acquisition unit 11B.
  • the audio output section 16 is a speaker, and is configured integrally with the game terminal 10.
  • the audio output unit 16 may be separate from the game terminal 10 and connected to the game terminal 10 by wire or wirelessly.
  • the audio output unit 16 may be configured integrally with a display device that is separate from the game terminal 10.
  • before the singing ability improves, the audio output section 16 outputs a poor song with many inferior parts; after the singing ability improves, the audio output section 16 outputs a good song with few or no inferior parts. Therefore, the user can audibly sense the growth of the training object, and can sense the training results through the song sung by the training object.
  • the server control unit 31 of the server 30 is configured as a computer and includes a processor (not shown).
  • This processor is, for example, a CPU or an MPU, and controls the entire server 30 based on a program stored in the server storage unit 32, and also controls various processes in an integrated manner.
  • the server control unit 31 can also perform control according to a program stored in a portable recording medium such as a CD, DVD, CF card, and USB memory, or in an external storage medium.
  • an operation section including a keyboard or various switches for inputting predetermined commands and data is connected to the server control section 31 by wire or wirelessly.
  • a display section (not shown) that displays the input state, setting state, measurement results, and various information of the device is connected to the server control section 31 by wire or wirelessly.
  • the server storage unit 32 is a computer-readable non-transitory storage medium. Specifically, the server storage unit 32 includes storage devices such as RAM, ROM, HDD, and SSD. The server storage unit 32 also stores server audio data 32A. Further, the server storage unit 32 may store data such as image data or music data necessary for progressing the game, update data for the terminal program PG, and the like.
  • the server communication unit 33 is a communication module, a communication interface, or the like. The server communication unit 33 allows data to be transmitted and received between the game terminal 10 and the server 30 via the network 50.
  • the game progression unit 11C simulates the training of the training object. Then, the game progress unit 11C changes the parameters of the breeding object according to the progress of the game (S101). After that, when starting the live part, the status acquisition unit 11A acquires, for example, singing ability as a parameter that is status information of the training object (S102). Further, the status acquisition unit 11A passes the acquired parameters to the data acquisition unit 11B (S103).
  • the data acquisition unit 11B acquires first sound data, which is output sound data with many inferior parts (S105).
  • the data acquisition unit 11B acquires the first sound data generated by the generation unit 11D.
  • the game progression unit 11C uses the first sound data acquired by the data acquisition unit 11B as output sound data, and causes the audio output unit 16 to output a sound based on this data (S106).
  • the data acquisition unit 11B acquires third sound data, which is output sound data without inferior parts (S108).
  • the data acquisition unit 11B acquires the third sound data generated by the generation unit 11D.
  • the game progression unit 11C uses the third sound data acquired by the data acquisition unit 11B as output sound data, and causes the audio output unit 16 to output a sound based on this data (S106).
  • the data acquisition unit 11B acquires second sound data that is output sound data with fewer inferior parts (S109).
  • the data acquisition unit 11B acquires the second sound data generated by the generation unit 11D.
  • the game progression unit 11C uses the second sound data acquired by the data acquisition unit 11B as output sound data, and causes the audio output unit 16 to output a sound based on the second sound data (S106). In this way, in the live part, audio is output based on output sound data according to the singing ability.
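The branch among steps S105, S108, and S109 can be sketched roughly as follows. The concrete bounds of the predetermined range (30 to 70 here) and the function name are assumptions, since the publication does not fix numeric values.

```python
# Illustrative selection of output sound data by singing ability;
# the range bounds are assumed, not specified in the publication.

RANGE_LOW, RANGE_HIGH = 30, 70  # assumed "predetermined range"

def select_output_sound(singing_ability):
    """Map the singing-ability parameter to one of three sound data."""
    if singing_ability < RANGE_LOW:
        return "first_sound_data"   # inferior state: many inferior parts (S105)
    if singing_ability > RANGE_HIGH:
        return "third_sound_data"   # excellent state: no inferior parts (S108)
    return "second_sound_data"      # normal state: fewer inferior parts (S109)

print(select_output_sound(20))  # first_sound_data
print(select_output_sound(50))  # second_sound_data
print(select_output_sound(90))  # third_sound_data
```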
  • according to the game system 100, it is possible to output sound based on output sound data acquired according to the parameters. Therefore, the user can audibly sense the results of training the training object.
  • the data acquisition unit 11B may acquire output sound data according to the value of the parameter of the game object or the progress of the game.
  • the data acquisition unit 11B specifies output sound data according to the value of a parameter (for example, physical strength) or the progress of the game from among a plurality of types of output sound data.
  • the data acquisition unit 11B then acquires the specified output sound data.
  • the terminal storage unit 12 stores a table that associates data identification information that specifies output sound data with parameters or progress of the game. Then, the data acquisition unit 11B refers to the table and specifies the output sound data.
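The table that associates data identification information with parameters might look like the following sketch; the table contents, ranges, and identifiers are hypothetical.

```python
# Hypothetical lookup table associating parameter ranges with data
# identification information that specifies output sound data.

SOUND_DATA_TABLE = [
    # (lower bound inclusive, upper bound exclusive, data identification info)
    (0, 30, "snd_inferior_01"),
    (30, 70, "snd_normal_01"),
    (70, 101, "snd_superior_01"),
]

def lookup_sound_data(parameter):
    """Specify output sound data by referring to the table."""
    for low, high, data_id in SOUND_DATA_TABLE:
        if low <= parameter < high:
            return data_id
    raise ValueError("parameter outside the table's ranges")

print(lookup_sound_data(45))  # snd_normal_01
```

The same table shape also works when keyed on game progress (for example, a chapter number) instead of a parameter value.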
  • the data acquisition unit 11B may acquire, as output sound data, sound data of a musical piece played by the training object according to the state indicated by the state information acquired by the state acquisition unit 11A.
  • the generation unit 11D generates output sound data of a song according to the state.
  • the game progression unit 11C then causes the audio output unit 16 to output audio based on the output sound data acquired by the data acquisition unit 11B.
  • the terminal audio data 12B includes various sound data of the song related to the performance, similar to the sound data of the song described above.
  • an inferior part of the output sound data of a song corresponding to an inferior state differs from the corresponding part of the original song in pitch, output timing, or loudness.
  • the output sound data of the musical piece corresponding to the excellent state includes, in at least a portion, a performance part played so as to reflect a performance technique.
  • the output sound data includes a performance portion that is performed using a performance technique such as rapid playing of a guitar.
  • the generation unit 11D may acquire reference sound data of a song and generate output sound data based on the acquired reference sound data and the like.
  • the data acquisition unit 11B may acquire, as output sound data according to the state, sound data of a song sung by the breeding object and sound data of a musical piece played by the breeding object.
  • the game progression unit 11C may cause the audio output unit 16 to output audio based on the output sound data of both the song and the musical piece.
  • a second embodiment will be described with reference to FIG.
  • the second embodiment differs from the first embodiment in that the server 230 includes a status acquisition unit 211A and a generation unit 211D.
  • differences from the first embodiment will be described, and the same reference numerals will be given to the components that have already been described, and the description thereof will be omitted.
  • components with the same reference numerals have substantially the same operations and functions, and their effects are also substantially the same.
  • the server storage unit 232 of the server 230 stores a server program PG2, which is an example of a game program. Then, the server program PG2 causes the server control section 231 as a computer to function as a state acquisition section 211A and a generation section 211D.
  • the status acquisition unit 211A acquires status information indicating the status of a training object, which is a game object to be trained, from the game terminal 210.
  • the data acquisition unit 11B of the game terminal 210 requests the server 230 for output sound data, and also transmits parameters as status information to the server 230. Note that when generating output sound data according to parameters that change during a live performance, the data acquisition unit 11B not only transmits the parameters when starting the live performance, but also transmits the parameters to the server 230 every time the parameters change.
  • the status acquisition unit 211A then passes the received status information to the generation unit 211D. Furthermore, the generation unit 211D generates output sound data according to the state indicated by the status information acquired by the status acquisition unit 211A.
  • the generation unit 211D generates output sound data according to the state indicated by the state information (for example, parameters) of the breeding object acquired by the state acquisition unit 211A.
  • the server storage unit 32 stores generation data, musical score data, and audio creation software.
  • the generation unit 211D generates output sound data using generation data and musical score data in real time before and during the live performance.
  • the server control unit 231 transmits the generated output sound data to the game terminal 210 in a streaming distribution manner.
  • the data acquisition unit 11B of the game terminal 210 acquires output sound data according to the state by acquiring the transmitted output sound data.
  • the game progress section 11C causes the audio output section 16 to output audio based on the output sound data.
  • the amount of data downloaded from the server 230 can be reduced. Furthermore, since the proportion of inferior parts can be finely changed according to the parameters, the degree of expression of changes in proficiency level can be improved. Furthermore, even if the parameters of the training object change due to user operations during a live performance, output sound data can be generated with the proportion of inferior parts changed in accordance with the changed parameters. In addition, output sound data including a necessary proportion of inferior parts can be generated without generating many types of output sound data.
  • the generation unit 211D may generate the output sound data using the generation data before the start of the live performance (for example, at the timing when the status acquisition unit 211A acquires the status information), instead of generating the output sound data in real time during the live performance.
  • the generation unit 211D includes the generated output sound data in the server audio data 32A and stores it in the server storage unit 32.
  • the server control unit 231 transmits the generated output sound data to the game terminal 210 in response to a download request from the data acquisition unit 11B. Thereby, the amount of data downloaded from the server 230 can be reduced. Furthermore, since the proportion of inferior parts can be finely changed according to the parameters, the degree of expression of changes in proficiency level can be improved.
  • the generation unit 211D may increase or decrease the proportion of the inferior portion continuously or stepwise according to the parameter. Further, the proportion of the inferior part may be determined according to a table in which the proportion of the inferior part is defined according to the parameter.
  • the generation unit 211D may generate the output sound data before the start of the live performance based on the reference sound data of the song. Specifically, the generation unit 211D generates output sound data by mixing inferior sound data, which is an example of first sound data including an inferior part in at least a portion, with reference sound data, which is an example of at least one other sound data that has a smaller proportion of inferior parts than the first sound data or includes no inferior parts. This allows output sound data including inferior parts to be generated through mixing, thereby reducing the load of the generation process. Note that the generation unit 211D can further use superior sound data as another sound data for mixing.
  • the generation unit 211D generates the reference sound data, inferior sound data, and superior sound data in advance using the generation data and musical score data.
  • the generation unit 211D may generate reference sound data, inferior sound data, and superior sound data when generating the output sound data.
  • the generation unit 211D increases or decreases the usage ratio of each of the inferior sound data, the reference sound data, and the superior sound data according to the parameters that the state acquisition unit 211A acquires from the game terminal 210. Furthermore, the usage ratio of these data may be determined in advance for each parameter.
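The mixing with usage ratios that increase or decrease according to the parameter might be sketched as follows. The piecewise-linear ratio curve, the range bounds, and all names are illustrative assumptions rather than the publication's actual method.

```python
# Hypothetical mixing of inferior, reference, and superior sound data
# with usage ratios derived from the singing-ability parameter.

def mixing_ratios(singing, low=30, high=70):
    """Return (inferior, reference, superior) usage ratios summing to 1."""
    if singing <= low:
        return (1.0, 0.0, 0.0)
    if singing >= high:
        return (0.0, 0.0, 1.0)
    t = (singing - low) / (high - low)      # 0..1 across the normal range
    if t < 0.5:
        return (1.0 - 2 * t, 2 * t, 0.0)    # fade inferior -> reference
    return (0.0, 2.0 - 2 * t, 2 * t - 1.0)  # fade reference -> superior

def mix(samples_by_kind, ratios):
    """Mix per-kind sample streams with the given usage ratios."""
    inf, ref, sup = ratios
    return [inf * i + ref * r + sup * s
            for i, r, s in zip(samples_by_kind["inferior"],
                               samples_by_kind["reference"],
                               samples_by_kind["superior"])]

print(mixing_ratios(40))  # (0.5, 0.5, 0.0)
```

A table-driven variant, with ratios predetermined for each parameter band, drops in by replacing `mixing_ratios` with a lookup.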
  • the generation unit 211D may generate output sound data in real time before and during the live performance. At this time, the server control unit 231 transmits the generated output sound data to the game terminal 210 in a streaming distribution manner. Furthermore, the generation unit 211D may generate the output sound data at the timing when the status acquisition unit 211A acquires the status information. At this time, the server control unit 231 transmits the generated output sound data to the game terminal 210 in response to the download request from the data acquisition unit 11B.
  • the generation unit 211D may generate multiple types of output sound data such that the degree of deterioration of the inferior part varies in stages according to the state information of the breeding object. That is, the generation unit 211D may generate a plurality of patterns (for example, three patterns) of output sound data in advance according to the state information of the breeding object. Then, the generation unit 211D includes the generated plural types of sound data in the server audio data 32A, and stores the server audio data 32A in the server storage unit 32.
  • the generation unit 211D generates in advance output sound data corresponding to a normal state in which the singing ability is within a predetermined range, output sound data corresponding to an excellent state in which the singing ability exceeds the predetermined range, and output sound data corresponding to an inferior state in which the singing ability falls below the predetermined range.
  • the data acquisition unit 11B of the game terminal 210 also functions as a status acquisition unit and acquires status information indicating the status of the breeding object. Then, the data acquisition unit 11B transmits to the server 230 a download request for output sound data according to the state of the training object. The data acquisition unit 11B acquires output sound data according to the state by acquiring the requested output sound data from the server 230. Thereby, the amount of data stored in the terminal storage section 12 of the game terminal 210 can be reduced. Furthermore, since the game terminal 210 does not generate output sound data, the processing load on the game terminal 210 can be reduced.
  • the data acquisition unit 11B requests the server 230 to download the output sound data specified according to the parameters. Specifically, when the singing ability is within a predetermined range, the data acquisition unit 11B requests output sound data (for example, second sound data) corresponding to a normal state. Further, when the singing ability exceeds a predetermined range, the data acquisition unit 11B requests output sound data (for example, third sound data) corresponding to an excellent state. Furthermore, when the singing ability falls below a predetermined range, the data acquisition unit 11B requests output sound data (for example, first sound data) corresponding to the inferior state.
  • the terminal storage unit 12 stores a table that associates data identification information that specifies output sound data with parameters or states. Then, the data acquisition unit 11B specifies the output sound data with reference to the table, and transmits a download request for the output sound data of the specified data identification information to the server 230.
  • the server storage unit 232 may store multiple types of output sound data in association with state information or a state specified by the state information.
  • the server storage unit 232 may store output sound data in association with parameters as state information. Thereby, output sound data can be specified and transmitted to the game terminal 210 based on the parameters acquired by the state acquisition unit 211A.
  • the server storage unit 232 may store output sound data in association with a poor state, a normal state, and an excellent state. Thereby, output sound data can be specified and transmitted to the game terminal 210 according to the state indicated by the parameter acquired by the state acquisition unit 211A.
  • the output sound data may be transmitted in the form of streaming distribution. In the case of streaming distribution, at least a part of the output sound data according to the request is included in the terminal audio data 12B and stored in the terminal storage unit 12.
  • the data acquisition unit 11B may request the server 230 to download output sound data corresponding to all states of the breeding object.
  • the output sound data of the breeding object is stored in the terminal storage section 12.
  • the data acquisition unit 11B selects and acquires necessary output sound data from the downloaded output sound data according to the state of the breeding object. For example, when the singing ability is below a predetermined range, the data acquisition unit 11B selects and acquires output sound data (for example, first sound data) corresponding to the inferior state.
  • the server control unit 231 may select output sound data according to the state indicated by the state information acquired from the game terminal 210 by the state acquisition unit 211A, and transmit it to the game terminal 210. Specifically, when the singing ability is within a predetermined range, the server control unit 231 selects output sound data (for example, second sound data) corresponding to a normal state. Further, when the singing ability exceeds a predetermined range, the server control unit 231 selects output sound data (for example, third sound data) corresponding to an excellent state. Furthermore, when the singing ability falls below a predetermined range, the server control unit 231 selects output sound data (for example, first sound data) corresponding to the inferior state. As an example, the server storage unit 32 stores a table that associates data identification information that specifies output sound data with parameters or states. Then, the server control unit 231 refers to the table and selects output sound data.
  • the data acquisition unit 11B transmits status information (for example, parameters) to the server 230. Then, the server control unit 231 selects output sound data according to the state indicated by the state information acquired from the game terminal 210 by the state acquisition unit 211A, and transmits the selected output sound data to the game terminal 210. The data acquisition unit 11B acquires the output sound data selected by the server control unit 231 from the server 230, thereby acquiring output sound data according to the state.
  • the server storage unit 232 stores a plurality of types of output sound data in association with state information or a state specified by the state information. Thereby, the server control unit 231 can select output sound data and transmit it to the game terminal 210 based on the state information or state.
  • according to the game system 200, it is possible to output sound based on output sound data acquired according to the parameters. Therefore, the user can audibly sense the results of training the training object.
  • the generation data is generated by machine learning using original data obtained from a performer's singing or reading as learning data.
  • the musical score data is created by the administrator of the server 30 or the user of the game terminal 10, or is automatically generated.
  • the output sound data may be directly generated from the generation data and the musical score data. Further, the output sound data may be generated by appropriately mixing inferior tone data, reference tone data, and superior tone data generated from the generation data and musical score data. The conditions for mixing are as described above.
  • the output sound data may be selected from among a plurality of sound data, such as first sound data, second sound data, and third sound data, generated under predetermined conditions based on the inferior sound data, reference sound data, and superior sound data. Furthermore, the inferior sound data, reference sound data, and superior sound data may be used as they are as the first sound data, second sound data, and third sound data, respectively. Further, the musical score data may be fixed, or may be changed as appropriate based on state information indicating the state of the object.
  • status information that changes as the game progresses may be acquired at a predetermined timing, and based on the acquired status information, changes may be made to the musical score data, such as designating the part to be sung as an inferior part or the part to be sung as a superior part. Then, output sound data is generated based on the changed musical score data and the generation data.
  • alternatively, inferior sound data, reference sound data, and superior sound data may be generated based on the changed musical score data and generation data, and the data to be output as output sound data may be selected from among these sound data.
  • inferior sound data, reference sound data, and superior sound data may be generated as intermediate sound data based on the changed musical score data and generation data.
  • the output sound data may be generated by appropriately mixing the intermediate sound data.
  • each sound data and the like may be generated by the server 30 or by the game terminal 10.
  • the output sound data is downloaded to the game terminal 10 at a predetermined timing.
  • at least the server 30 holds the musical score data.
  • intermediate sound data or generation data is generated by the server 30 and downloaded to the game terminal 10 at an appropriate timing.
  • the musical score data is held by the game terminal 10 or the server 30, or held by the game terminal 10 and the server 30.
  • the generation data is downloaded to the game terminal 10 at an appropriate timing.
  • the game terminal 10 at least holds the musical score data. Furthermore, changes to the musical score data may be made at the server 30 or at the game terminal 10. Further, the output sound data may be sequentially generated and played back during a live performance. Further, the output sound data may be selected or generated immediately before the live performance, and may be played back during the live performance. Alternatively, the generation of the output sound data may be started by referring to the status information at a predetermined timing in the middle of the section, and the generation may be completed by the time of the live performance. When this generation is performed by the server 30, the download of the output sound data to the game terminal 10 may be performed in parallel with the progress of the game performed by user operations, etc., by the time of live execution.
  • a plurality of sound data may be held in advance in the server 30 as candidates for the output sound data.
  • the sound data to be used as the output sound data may be determined from among the plurality of sound data by referring to the state information at a predetermined timing partway through a section, and the determined sound data may be downloaded to the game terminal 10 in parallel with the progress of the game through user operations and the like. This reduces the time the user is kept waiting for the download process.
  • the state acquisition unit 11A, the data acquisition unit 11B, and the generation unit 11D may be provided separately for the server 30 and the game terminal 10.
  • the game terminal 10 may be provided with the state acquisition section 11A that determines the state, and the generation section 11D may be provided on the server 30.
  • the server control section 31 and the terminal control section 11 cooperate to function as a computer.
  • the data acquisition unit 11B may function as a generation unit that generates output sound data.
  • the generating means may be provided outside the game system 100, 200.
  • the generation units 11D and 211D can be omitted.
  • at least one of the generation data, the reference sound data, the inferior sound data, the superior sound data, and the output sound data may be stored in the terminal storage unit 12 or the server storage unit 32, 232 in advance or as necessary.
  • the generation units 11D and 211D may obtain generation data from outside the game system 100 and 200 to generate reference sound data, inferior sound data, superior sound data, or output sound data.
  • the generation units 11D and 211D may obtain at least one of reference sound data, inferior sound data, and superior sound data from outside the game system 100 and 200, and generate the output sound data.
  • the output sound data may be data generated by recording a song sung by a human.
  • the reference tone data, inferior tone data, and superior tone data may also be data generated by recording a song sung by a human. In these cases, the output sound data is stored in the terminal storage unit 12 or the server storage unit 32, 232 in advance or as needed.
  • the training method performed in the training part is not limited to the method described above, as long as it can grow the training object.
  • for example, a training method may be used in which a plurality of objects including the training object walk around a field and fight enemies they encounter, or a training method may be used in which the training object's parameters are increased by combining cards or the like obtained by winning a lottery.
  • a training method may also be used in which the user plays a so-called timing game, performing an operation at the moment when an indicator that moves on the screen in time with the rhythm reaches a predetermined point.
  • the generation units 11D and 211D may generate the output sound data, the inferior sound data, or the superior sound data based on the reference sound data. Specifically, the generation units 11D and 211D may generate the inferior sound data by modifying the reference sound data so that at least a part thereof becomes an inferior part, or may generate the superior sound data by modifying the reference sound data so that at least a part thereof becomes a performance part. In addition, the generation units 11D and 211D may appropriately modify the reference sound data so that at least a part thereof becomes an inferior part or a performance part, and may thereby generate, for example, first to third sound data as candidates for the output sound data.
  • a game system 100, 200 comprising: state acquisition means 11A, 211A for acquiring state information indicating the state of the game object; data acquisition means 11B, 211B for acquiring output sound data of the musical piece or the song according to the state indicated by the state information; and sound output control means 11C for outputting sound based on the acquired output sound data.
  • the data acquisition means 11B, 211B may acquire output sound data in which the proportion of the inferior part is higher when the state information indicates an inferior state than when the state information indicates a superior state.
  • the game system 100, 200 according to appendix 2.
  • the game system 100, 200 according to any one of appendices 1 to 4, wherein the state information is a parameter, the game system further comprising game progress means 11C that simulates the growth of the game object and changes the parameter according to the progress of the game.
  • (Appendix 6) the game system 100, 200 according to appendix 5, wherein the data acquisition means 11B, 211B selects and acquires the output sound data according to the parameter from among a plurality of different types of sound data.
  • (Appendix 10) the game system 100, 200 according to appendix 8 or 9, wherein the other sound data includes at least a part of a performance part that is performed so as to reflect a performance technique or a singing technique.
  • game programs PG, PG2 for a game system 100, 200 that comprises a computer 11, 231 and provides a game that simulates the training of a game object to be trained, stages a performance in which the game object performs a musical piece or sings a song, and outputs the musical piece being performed or the song being sung, the game programs PG, PG2 causing the computer to function as: state acquisition means for acquiring state information indicating the state of the game object; data acquisition means 11B, 211B for acquiring output sound data of the musical piece or the song according to the state indicated by the state information; and sound output control means 11C for outputting sound based on the acquired output sound data.
  • a control method for a game system 100, 200 that comprises a computer 11, 231 and provides a game that simulates the training of a game object to be trained, stages a performance in which the game object performs a musical piece or sings a song, and outputs the musical piece being performed or the song being sung, the control method causing the computer 11, 231 to: acquire state information indicating the state of the game object; acquire output sound data of the musical piece or the song according to the state indicated by the state information; and output sound based on the acquired output sound data.
  • according to the game programs PG, PG2 described in appendix 12 or the control method described in appendix 13, sound corresponding to the training result is output, so the user can audibly perceive the result of training the training object. Further, by providing multiple opportunities to output such sound within one training part, the user can feel that the musical piece being performed or the song being sung improves through training. Furthermore, according to the game systems 100, 200 described in appendices 4 and 6, the process of generating output sound data each time can be omitted, reducing the processing load. Furthermore, according to the game systems 100, 200 described in appendices 5 to 7, the state can be determined based on a parameter serving as the state information, so that changes in the parameter resulting from training are reflected in the quality of the output sound data.
  • output sound data including an inferior part can be generated by mixing, without generating a large number of pieces of output sound data. Furthermore, according to the game system 100, 200 described in appendix 11, output sound data can be generated by incorporating an inferior part into the reference sound data, without generating a large number of pieces of output sound data.
  • Reference signs:
11: Terminal control unit (computer)
11A: State acquisition unit (state acquisition means)
11B: Data acquisition unit (data acquisition means)
11C: Game progress unit (sound output control means)
11D: Generation unit (generation means)
16: Sound output unit (sound output means)
100: Game system
200: Game system
211A: State acquisition unit (state acquisition means)
211B: Data acquisition unit (data acquisition means)
211D: Generation unit (generation means)
231: Server control unit (computer)
PG: Terminal program (game program)
PG2: Server program (game program)
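The mixing of intermediate tone data mentioned among the alternatives above can be illustrated with a small sketch. This is not the claimed implementation: the function name `mix_output_sound`, the linear cross-fade, and the representation of tone data as equal-length sample lists are all illustrative assumptions.

```python
# Hypothetical sketch: blend inferior, reference, and superior tone data
# into output sound data according to a normalized skill level in [0, 1].
# The linear cross-fade between adjacent tone data is an assumption.

def mix_output_sound(inferior, reference, superior, skill):
    """Cross-fade three equal-length sample sequences by skill level."""
    if not (0.0 <= skill <= 1.0):
        raise ValueError("skill must be in [0, 1]")
    if skill < 0.5:
        # Blend from inferior toward reference as skill rises to 0.5.
        w = skill / 0.5
        pairs = zip(inferior, reference)
    else:
        # Blend from reference toward superior as skill rises to 1.0.
        w = (skill - 0.5) / 0.5
        pairs = zip(reference, superior)
    return [a * (1.0 - w) + b * w for a, b in pairs]

samples_bad = [0.0, 0.1, 0.2]   # inferior tone data (hypothetical samples)
samples_ref = [0.0, 0.5, 1.0]   # reference tone data
samples_top = [0.0, 0.9, 1.8]   # superior tone data

print(mix_output_sound(samples_bad, samples_ref, samples_top, 0.0))
print(mix_output_sound(samples_bad, samples_ref, samples_top, 1.0))
```

A single parameterized mix like this would avoid pre-generating one output sound file per possible state, which is the point of the mixing alternative described above.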

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Electrophonic Musical Instruments (AREA)

Abstract

A game system 100 provides a game that simulates the rearing of a game object to be reared, stages a performance in which the game object performs a musical piece or sings a song, and outputs the musical piece being performed or the song being sung. The game system comprises: status acquisition means 11A for acquiring status information indicating the status of the game object; data acquisition means 11B for acquiring output sound data of the musical piece or the song in accordance with the status indicated by the status information; and sound output control means 11C for outputting sound based on the acquired output sound data.

Description

Game system, and game program and control method for game system
The present invention relates to a game system that simulates the training of a game object to be trained and outputs sound according to the state of the game object, and to a game program and a control method for the game system.
Patent Document 1 discloses an audio mixdown device. This audio mixdown device includes an audio file input unit, a mixdown control unit, an effect processing unit, and a synthesized audio data output unit. When an audio file recorded using karaoke on a device used by a user is input, the audio file input unit stores it in a recorded audio file database in association with identification data.
The mixdown control unit selects a plurality of recorded audio files to be synthesized from the recorded audio file database in response to a request from the user. The effect processing unit applies effect processing to the plurality of recorded audio files and generates synthesized audio data. The synthesized audio data output unit outputs the generated synthesized audio data.
Japanese Patent Application Publication No. 2008-051896
A game may stage a performance in which a game object, such as a character appearing in the game, sings, and the song may be output while the game is being played. One such game is a training game in which the user trains a character to improve the character's parameters. Such a training game includes, for example, a performance in which the trained character sings.
However, even if its parameters improve, a character in the game does not actually become better at singing, so the character sings with the same level of skill before and after training. Because the same sound is output regardless of how far the character has been trained, it is difficult for the user to perceive the results of training the character. In view of this problem, a mechanism is desired that allows the user to better perceive the results of training a character.
A game system according to one aspect provides a game that simulates the training of a game object to be trained, stages a performance in which the game object performs a musical piece or sings a song, and outputs the musical piece being performed or the song being sung. The game system comprises: state acquisition means for acquiring state information indicating the state of the game object; data acquisition means for acquiring output sound data of the musical piece or the song according to the state indicated by the state information; and sound output control means for outputting sound based on the acquired output sound data.
A game program according to another aspect is a game program for a game system that comprises a computer and provides a game that simulates the training of a game object to be trained, stages a performance in which the game object performs a musical piece or sings a song, and outputs the musical piece being performed or the song being sung. The game program causes the computer to function as: state acquisition means for acquiring state information indicating the state of the game object; data acquisition means for acquiring output sound data of the musical piece or the song according to the state indicated by the state information; and sound output control means for outputting sound based on the acquired output sound data.
A control method according to another aspect is a control method for a game system that comprises a computer and provides a game that simulates the training of a game object to be trained, stages a performance in which the game object performs a musical piece or sings a song, and outputs the musical piece being performed or the song being sung. The control method causes the computer to: acquire state information indicating the state of the game object; acquire output sound data of the musical piece or the song according to the state indicated by the state information; and output sound based on the acquired output sound data.
As a result, sound corresponding to the training result is output, so the user can audibly perceive the result of training the training object.
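The flow stated above, acquiring state information, acquiring output sound data matching that state, and outputting the sound, can be sketched as follows. The table of sound files, the function names, and the numeric thresholds are hypothetical illustrations, not the actual implementation of the claimed means.

```python
# Minimal sketch of the claimed flow under assumed names and thresholds:
# state acquisition -> data acquisition -> sound output control.

OUTPUT_SOUND_TABLE = {
    "inferior": "song_inferior.wav",    # hypothetical file names
    "reference": "song_reference.wav",
    "superior": "song_superior.wav",
}

def acquire_state(singing_skill):
    """State acquisition means: map a parameter to a coarse state label."""
    if singing_skill < 300:
        return "inferior"
    if singing_skill < 700:
        return "reference"
    return "superior"

def acquire_output_sound_data(state):
    """Data acquisition means: pick the sound data matching the state."""
    return OUTPUT_SOUND_TABLE[state]

def output_sound(singing_skill):
    """Sound output control means: return the data that would be played."""
    return acquire_output_sound_data(acquire_state(singing_skill))

print(output_sound(100))   # low skill selects the inferior sound data
print(output_sound(900))   # high skill selects the superior sound data
```

Because the same musical piece is backed by different sound data per state, the played-back audio changes as training progresses, which is the effect described in the preceding paragraph.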
FIG. 1 is a schematic diagram showing the overall configuration of the game system.
FIG. 2 is a schematic diagram of the game.
FIG. 3 is a schematic block diagram of the game system according to the first embodiment.
FIG. 4 is a schematic diagram showing the proportion of the inferior part.
FIG. 5 is an explanatory diagram of the inferior part and the performance part.
FIG. 6 is a flowchart of acquisition of the output sound data.
FIG. 7 is a schematic block diagram of the game system according to the second embodiment.
Hereinafter, exemplary embodiments for implementing the present invention will be described in detail with reference to the drawings. However, the dimensions, materials, shapes, and relative positions of the components described in the following embodiments can be set arbitrarily and can be changed according to the configuration of the apparatus or method to which the present invention is applied, or according to various conditions. Furthermore, unless otherwise specified, the scope of the present invention is not limited to the embodiments specifically described below. In the following description, identification information is data composed of letters, numbers, symbols, images, or a combination thereof.
FIG. 1 is a schematic diagram showing the overall configuration of a game system 100. As shown in FIG. 1, the game system 100 includes a game terminal 10, which is an example of a user terminal, and a server 30. The server 30 is configured as a single logical server by combining a plurality of server units 52. However, the server 30 may be configured by a single server unit 52, or may be configured logically using cloud computing.
The server 30 is configured to be connectable to a network 50. As an example, the network 50 is configured to realize network communication using the TCP/IP protocol. Specifically, a local area network LAN connects the server 30 and the Internet 51, and the Internet 51 as a WAN and the local area network LAN are connected via a router 53. The game terminal 10 is also configured to be connected to the Internet 51. The servers 30 may be interconnected by the local area network LAN or by the Internet 51. The network 50 may be a dedicated line, a telephone line, an in-house network, a mobile communication network, another communication line, or a combination thereof, whether wired or wireless.
The game terminal 10 is a computer device operated by a user. For example, the game terminal 10 includes a stationary or book-type personal computer 54 and a mobile terminal device 55 such as a mobile phone, including a smartphone. The game terminal 10 also includes various other computer devices such as a stationary home game device, a portable game device, a portable tablet terminal device, and an arcade game machine. By implementing various computer software, the game terminal 10 can let the user enjoy the various services provided by the server 30. In the following, an example in which the game terminal 10 is the mobile terminal device 55 will mainly be described.
As an example, the server 30 transmits the program and data used for the game to the game terminal 10 via the network 50, and the game terminal 10 stores the received program and data. Alternatively, the game terminal 10 may be configured to read a program or data stored on an information storage medium (not shown). In this case, the game terminal 10 may acquire the program or data via the information storage medium.
The user can play various games on the game terminal 10. For example, such a game includes an element of training game objects. Specific examples include a simulation game that simulates the training of game objects, as well as a competitive trading card game, a music game, a board game, a mahjong game, an RPG, a horse racing game, a fighting game, a puzzle game, a quiz game, and a sports game such as baseball or soccer. A game object is an object that is displayed or used on the game terminal 10. As an example, game objects are used in the game processing that advances the game, and include characters, cards, effects, equipment, items, and the like.
The following description mainly concerns an example in which a game is played on the game terminal 10 that simulates the training of a training object, which is a game object to be trained. In this example, the material object, which is a game object serving as the material for training, is an idol character with a human-like appearance, and the training object is a copy of that character. In other words, a plurality of training objects may exist for the same character. However, the training object may be the character itself, that is, the material object. (The material object may be copyable, in which case the training object may be the material object and/or a copy of the material object; or the material object may not be copyable, in which case the training object is the material object itself.) Alternatively, no material object may exist, and a predetermined training object may be given to the user when the training game is started. The training object may also be a virtual card or the like corresponding to a character.
[Game overview]
An overview of the game will be explained with reference to FIGS. 2 and 3. The game has a training part in which the training object is trained, and the training part is divided into a plurality of sections. Each section consists of a plurality of turns, and in the last turn there is a live part in which the training object gives a live performance. The number of turns included in each section differs depending on the character serving as the material for the training object, the scenario in progress, and so on. In each turn, the training object can be made to perform a predetermined action. In the training part, events targeting the training object occur at appropriate timings within each turn, for example an event of playing with friends or an event of holding a training camp. A plurality of events may occur in one turn, or no event may occur.
Before starting the training part, the user can construct a deck consisting of one or more event objects, which are game objects. When a deck is constructed, support events occur with a predetermined probability during the training part. An event object corresponds to, for example, a character that is a material object; for instance, an event object is a virtual card on which a character is drawn. The support events that occur by constructing a deck are associated with the event objects. For example, the event object of a singer character is associated with a support event that raises singing ability, and that support event occurs with a predetermined probability during a lesson that raises singing ability.
In the training part, the user trains the training object by having it perform various actions. That is, the training object performs various actions in response to the user's instructions, and as a result the parameters of the training object change. As an example, the actions performed by the training object include lessons, work, rest, going out, visiting a hospital, and acquiring skills. However, an action performed by the training object may be any option that produces an effect such as a change in a parameter associated with the training object, the acquisition or loss of an ability, the acquisition or use of an item, or a change in the relationship with another game object. For example, as other actions, the training object may be able to go on a training camp or a trip, give a live performance, be interviewed, be filmed, make an appearance, enter a competition or an audition, go to school, and so on.
A parameter of the training object is a variable linked to object identification information that uniquely identifies the training object, and it changes as the game progresses. As an example, a parameter is information indicating the magnitude or level of an ability, information indicating the presence or absence of an ability, or information indicating the state of the game object. When it indicates a magnitude or level, the parameter changes as its value increases or decreases; when it indicates the presence or absence of an ability, the parameter changes as a flag is turned on or off. Specifically, the parameters include singing ability, dancing ability, expressiveness, visual appeal (for example, a value raised by clothing, makeup, hairstyle, and the like), acting ability, performance ability, mental strength, stamina, intelligence, and charm. Furthermore, by performing actions, it may be possible to acquire consumption elements, such as skill points and in-game currency, that are consumed to acquire skills that advance the training advantageously. The points that can be acquired by performing an action may also be a value that gives the user an advantage in acquiring skills, for example a value that has the effect of reducing the skill points required when acquiring a skill.
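The parameter model described above, graded values that rise and fall, flag-like abilities, and consumable skill points keyed to an object ID, can be sketched as a small data structure. The class name, field names, and costs are hypothetical; the patent does not specify this representation.

```python
# Hypothetical sketch of training-object parameters keyed by an object ID.
# Graded abilities vary by value; skills vary by flag; skill points are
# a consumption element. All names and numbers are illustrative only.
from dataclasses import dataclass, field

@dataclass
class TrainingObject:
    object_id: str
    # Graded abilities (change by increasing/decreasing a value).
    singing: int = 0
    dance: int = 0
    stamina: int = 0
    # Presence/absence abilities (change by toggling a flag).
    skills: set = field(default_factory=set)
    # Consumption element spent to acquire skills.
    skill_points: int = 0

    def lesson(self, stat, amount):
        """A lesson action raises one graded parameter."""
        setattr(self, stat, getattr(self, stat) + amount)

    def acquire_skill(self, name, cost):
        """Spend skill points to turn a skill flag on; no turn is consumed."""
        if self.skill_points < cost:
            return False
        self.skill_points -= cost
        self.skills.add(name)
        return True

idol = TrainingObject(object_id="obj-001")
idol.lesson("singing", 40)
idol.skill_points = 100
idol.acquire_skill("high_note", cost=60)
```

Keying the record to `object_id` mirrors the statement that parameters are variables linked to object identification information.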
In each turn, an event may occur along a predetermined scenario, such as a drama depicting the friendship between game objects. Furthermore, an event that affects the training of the training object may occur with a predetermined probability in each turn. As an example, while the training object is taking a lesson, a support event in which a friend helps with the lesson occurs with a predetermined probability. When this support event occurs, an effect that is advantageous or disadvantageous to the training can be obtained. An advantageous effect is, for example, an increase in the amount by which a parameter rises through the lesson, or a rise in a parameter of the training object. A disadvantageous effect is, for example, a decrease in the amount by which a parameter rises, or a fall in a parameter of the training object.
The live part functions as a checkpoint for confirming the degree of training, and is held in the last turn of each section. For example, in the live part, a mini-game is played in which a live performance progresses automatically. In this mini-game, a live performance sung by the training object alone, or by a unit consisting of a plurality of game objects including the training object, is displayed as a performance video. The success conditions of the live performance can then be achieved according to the parameters and the like of the trained object. For example, the success condition may be that the singing ability exceeds a predetermined value, that the number of fans or the number of tickets sold exceeds a predetermined number, or simply that the live performance is held. If the success conditions are achieved, the training can continue; if they are not, the training ends. The live part is not limited to being held in the last turn of each section as described above; it may be held only at the end of the training part, or in a part different from the training part (that is, not as a part of the training part).
Basically, one turn progresses each time the training object performs an action; that is, the user has the training object perform one action per turn. For example, in the work action, the training object performs a job selected by the user from among the jobs that make up a work route containing a plurality of jobs. By performing a job, the training object grows and the parameters associated with it change. Furthermore, performing a job may unlock new jobs that the training object can perform. Examples of jobs include live performances, interviews, filming, appearances, competitions, and auditions.
 The actions that the training object can be made to perform also include actions that do not consume a turn. For example, even if a skill-acquisition action is performed, the turn does not advance, and the training object can be made to perform another action. Note that an action that does not consume a turn may be permitted in the turn of the live part, before the live performance starts. In addition, on the last turn of each section, a target live performance is held as the live part. Holding conditions for the live performance may be set separately.
 As an example, the holding condition is to obtain a predetermined number of fans, ticket sales, or the like. The number of fans or ticket sales increases by having the training object perform actions such as jobs, or through the occurrence of events. Furthermore, the success rate of the live performance in the live part increases or decreases depending on the parameters of the training object trained in the training part. To make the live performance succeed, the user needs to take lessons in the training part to increase the parameters.
 The number of turns until training ends is arbitrary; as an example, it is 72 turns, corresponding to six years in the game. The user trains at least one training object in the training part consisting of a plurality of turns. After training ends, the training object whose training has finished can be used as an inheritance object. For example, an arbitrary character can be set as the training object, and a training object whose training finished before that character can be set as an inheritance object. For example, when an inheritance object is selected, the training object can inherit the parameters, talents, job routes, and the like associated with the inheritance object.
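The inheritance mechanism above can be sketched as follows. The 10% parameter carry-over rate and all names are hypothetical illustrations, not values stated in the specification:

```python
# Sketch of inheritance: a new training object takes over part of the
# parameters, talents, and job routes of previously finished training
# objects (the 10% carry-over rate is a hypothetical choice).
def apply_inheritance(base_params, inherited):
    params = dict(base_params)
    talents, job_routes = set(), set()
    for obj in inherited:  # e.g. the two selected inheritance objects
        for name, value in obj["params"].items():
            params[name] = params.get(name, 0) + value // 10
        talents.update(obj["talents"])
        job_routes.update(obj["job_routes"])
    return params, talents, job_routes
```

A usage sketch: inheriting from one finished object with singing 100 would raise a base singing value of 10 to 20 while copying that object's talents and job routes.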
 As an example, the game described above proceeds as follows. First, the user selects a training object to be trained from among a plurality of game objects serving as training material. The user also selects inheritance objects having inheritance elements to be inherited by the training object. For example, the user selects two inheritance objects; however, the number of selectable inheritance objects may be one, or three or more. The user then selects one or more event objects to construct a deck. For example, the user selects six event objects; however, the number of selectable event objects may be five or fewer, or seven or more. Depending on the event objects included in this deck, support events may occur in the training part. The support events may differ from one another, and each has a predetermined influence on the training of the training object.
 Next, when the user has constructed the deck, the training part begins. In the training part, the user selects one action from among lessons, jobs, rest, hospital visits, and skill acquisition, and has the training object perform it. Specifically, when the training object falls into a bad state (for example, when it becomes sick or injured), the user selects the hospital-visit action to resolve the bad state. When the training object's condition worsens, the user selects the outing action to bring it from poor condition to good or excellent condition. Note that when the training object is in a bad state or in poor condition, the effect of lessons is reduced, or lessons cannot be selected. When the effect of a lesson is reduced, the amount of parameter increase drops, the parameter increase is capped, the parameter decreases, or the like.
 Basically, one turn advances when the training object performs the action selected by the user. In the training part, the stamina of the training object also increases and decreases. Basically, performing a lesson or a job lowers the training object's stamina. When the stamina is lower than a predetermined value, the probability of being injured by performing a lesson rises, the effect of the lesson is reduced, or the lesson cannot be selected. The user therefore selects the rest action to recover a predetermined amount of stamina. Note that stamina may also be recoverable through the occurrence of an event, the use of a skill or an item, or the like.
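The stamina rules above can be sketched as follows. All threshold values, costs, and rates are hypothetical placeholders for the "predetermined values" in the text:

```python
# Sketch of the stamina mechanics (all numeric values hypothetical).
STAMINA_THRESHOLD = 30   # "predetermined value" below which penalties apply
BASE_INJURY_RATE = 0.05

def lesson_cost(stamina):
    """A lesson lowers stamina; below the threshold the injury
    probability rises and the lesson's effect multiplier drops."""
    injury_rate = BASE_INJURY_RATE if stamina >= STAMINA_THRESHOLD else 0.30
    effect = 1.0 if stamina >= STAMINA_THRESHOLD else 0.5
    return stamina - 20, injury_rate, effect

def rest(stamina, recovery=40, cap=100):
    # The rest action recovers a predetermined amount of stamina.
    return min(stamina + recovery, cap)
```

With these assumptions, lessons taken at high stamina carry only the base injury risk, while lessons below the threshold both risk injury and yield reduced parameter gains, which is why resting becomes the rational choice.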
 Furthermore, the user selects job actions to satisfy the predetermined holding conditions so that a live performance can be held. By the training object performing jobs, the number of fans, ticket sales, or the like increases. The user also selects the skill-acquisition action to acquire skills that advantageously advance the live part and the like. As examples, the effects exerted by skills may include an effect that makes events or jobs more likely to succeed, an effect that changes the parameters of the training object, and an effect that increases the amount by which lessons raise parameters.
 When a predetermined number of turns has elapsed, a live part occurs as a checkpoint. If, as a result of training, the success condition is achieved, training can continue; if it is not achieved, training ends. A plurality of checkpoints may be provided. Once training ends, the training object can be used as an inheritance object.
 [First embodiment]
 The game system 100 provides a game in which a staged performance is presented in which a game object plays a piece of music or sings a song, and the played piece or sung song is output. In the game, the training of a game object to be trained is also simulated. As shown in FIG. 3, this game system 100 includes a game terminal 10 and a server 30. In the following, an example of a game in which a staged performance of a game object singing a song is presented and the sung song is output will mainly be described.
 The game terminal 10 has a terminal control unit 11 as an example of terminal control means, a terminal storage unit 12 as an example of terminal storage means, a terminal communication unit 13 as an example of terminal communication means, a terminal operation unit 14 as an example of operation means, a terminal display unit 15 as an example of display means, and an audio output unit 16 as an example of audio output means. As an example, the terminal control unit 11 is configured as a computer and has a processor (not shown). This processor is, for example, a CPU (Central Processing Unit) or an MPU (Micro-Processing Unit). Based on the control program and the game program stored in the terminal storage unit 12, the processor controls the game terminal 10 as a whole and also performs overall control of various processes.
 The terminal storage unit 12 is a computer-readable non-transitory storage medium. Specifically, the terminal storage unit 12 includes a RAM (Random Access Memory) serving as system work memory for the processor to operate, as well as storage devices that store programs and system software, such as a ROM (Read Only Memory), an HDD (Hard Disc Drive), and an SSD (Solid State Drive). In this embodiment, the CPU of the terminal control unit 11 executes processing operations such as various calculations, controls, and determinations according to a control program stored in the ROM or HDD of the terminal storage unit 12. Alternatively, the terminal control unit 11 can perform control according to a control program stored on a portable recording medium such as a CD (Compact Disc), DVD (Digital Versatile Disc), CF (Compact Flash) card, or USB (Universal Serial Bus) memory, or on an external storage medium such as a server on the Internet.
 The terminal storage unit 12 also stores a terminal program PG, which is an example of a game program, object data 12A, and terminal audio data 12B. The object data 12A includes, as game object data, character images, parameter values, information indicating the state of the object, and the like, associated with object identification information that uniquely identifies each object. The terminal audio data 12B includes sound data related to character voices and singing. As an example, the terminal audio data 12B is waveform data in a predetermined format such as the WAV format.
 Furthermore, the terminal storage unit 12 stores data (not shown) necessary for the game processing that advances the game, such as game images and game music. The terminal program PG causes the terminal control unit 11, as a computer, to function as a state acquisition unit 11A, which is an example of state acquisition means; a data acquisition unit 11B, which is an example of data acquisition means; a game progress unit 11C, which is an example of audio output control means; and a generation unit 11D, which is an example of generation means. That is, the terminal control unit 11 has each of these units as logical devices realized by a combination of hardware and software. Alternatively, the terminal program PG can be stored on a computer-readable non-transitory storage medium other than the terminal storage unit 12.
 The terminal operation unit 14 is an input device through which the user inputs game operations. The terminal display unit 15 is a device that displays game images, such as a liquid crystal display or an organic EL display. The audio output unit 16 is an output device that outputs game music and the like, such as a speaker or headphones. Note that in FIG. 3, the terminal operation unit 14 and the terminal display unit 15 are shown separately; however, the terminal operation unit 14 and the terminal display unit 15 may be configured integrally as a touch panel. The terminal operation unit 14 may also include a touch pad not integrated with the terminal display unit 15, a pointing device such as a mouse, buttons, keys, a lever, a stick, or the like. Furthermore, the terminal operation unit 14 may be a device that detects the user's voice or the user's movements and performs operations according to the detection results.
 In the game provided by the game system 100, audio based on output sound data corresponding to the state of the training object is output. For example, in the early stage of the game, before the singing-ability parameter has risen, audio based on output sound data of a poorly sung song is output from the audio output unit 16. On the other hand, in the late stage of the game, after the singing ability has risen, audio based on output sound data of a well-sung song is output from the audio output unit 16. This allows the user to perceive the growth of the training object aurally.
 As a specific example, as singing improves through training, the state of the training object changes from an inferior state, through a normal state, to a superior state. As an example, in the inferior state, a poorly sung song is output in which the vocal range is narrow and the pitch of high or low notes differs from the original song. In the inferior state, a poorly sung song may also be output in which, for notes higher or lower than that vocal range, the pitch differs from the original song so that the voice cracks upward or goes flat. In the inferior state, a poorly sung song may also be output in which the opening of the song or a solo part is sung faster or slower than the original song. Furthermore, in the inferior state, a poorly sung song may be output in which the pitch differs greatly from the original song in parts where the pitch changes greatly or changes in steps. That is, as an example, the inferior state may be any state in which at least some musical elements are inferior to the original song.
 In the normal state, an ordinary song is output in which the vocal range is somewhat wider and the pitch of the highest or lowest notes differs only slightly from the original song. In the normal state, an ordinary song may also be output in which, for notes higher or lower than that vocal range, the pitch differs from the original song so that the voice cracks upward or goes slightly flat. In the normal state, an ordinary song may also be output in which the opening of the song or a solo part is sung slightly faster or slower than the original song. Furthermore, in the normal state, an ordinary song may be output in which the pitch differs slightly from the original song in parts where the pitch changes greatly or changes in steps. That is, as an example, the normal state may be any state in which the musical elements are comparable to those of the original song.
 Furthermore, in the superior state, a well-sung song is output in which the vocal range is wide and the waveform is the same as the original song except for embellishment parts and the like. In the superior state, a well-sung song may also be output in which the pitch and the timing of the singing are the same as in the original song except for embellishment parts and the like. Furthermore, in the superior state, a well-sung song may be output that includes embellishment parts reflecting singing techniques. That is, as an example, the superior state may be any state in which at least some musical elements are superior to the original song.
 [Sound data]
 The output sound data is sound data corresponding to the song output from the audio output unit 16, and is ultimately used to output the song from the audio output unit 16. For example, the output sound data is selected from among a plurality of types of sound data included in the terminal audio data 12B stored in the terminal storage unit 12. Alternatively, the plurality of types of sound data may be stored in the server storage unit 32 of the server 30. The output sound data may also be generated on demand in the game terminal 10 or the server 30 as necessary.
 As an example, the plurality of types of sound data include first sound data, which corresponds to the inferior state and has many inferior parts; second sound data, which corresponds to the normal state and has few inferior parts; and third sound data, which corresponds to the superior state, has no inferior parts, and includes embellishment parts. These first to third sound data are generated as candidates for the output sound data. The first to third sound data are also based on the same piece of music; therefore, at least portions of the first to third sound data (for example, some bars) have the same waveform.
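Selecting among the three candidate sound data sets according to the training object's state can be sketched as follows. The threshold values for the singing-ability parameter are hypothetical:

```python
# Map the training object's singing-ability parameter to one of the
# three candidate sound data sets (thresholds are hypothetical).
def select_output_sound_data(singing_ability):
    if singing_ability < 40:
        return "first_sound_data"   # inferior state: many inferior parts
    elif singing_ability < 80:
        return "second_sound_data"  # normal state: few inferior parts
    return "third_sound_data"       # superior state: embellishment parts
```

Because the three data sets are based on the same piece of music, switching among them changes only how well the song is sung, not which song is heard.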
 The inferior parts of the first sound data and the second sound data differ in pitch and the like from the parts with the same lyrics in the third sound data. For example, a sound output using an inferior part for the lyric "a" is a degraded sound that gives the user a sense of incongruity compared to the part for the lyric "a" in the reference sound data. As examples, degraded sounds include sounds whose pitch is too high or too low, sounds whose timing is early or late, sounds in which the lyrics are wrong or skipped, sounds in which the voice is too quiet or too loud, sounds in which the voice cracks upward, and sounds in which the voice is hoarse.
 The embellishment parts of the third sound data are given embellishments reflecting singing techniques. For example, when the third sound data is a song, the embellishment parts are given embellishments reflecting singing techniques such as vibrato. As other examples, the embellishment parts may be given embellishments such as staccato, scooping ("shakuri"), falls, or melisma ("kobushi").
 As another example, the output sound data may be generated using reference sound data corresponding to the song sung as written in the musical score, inferior sound data including inferior parts that are inferior to the reference sound data, and superior sound data including embellishment parts that are superior to the reference sound data. In this case, the output sound data is generated by mixing at least two of the reference sound data, the inferior sound data, and the superior sound data. Broadly speaking, the first sound data, the second sound data, or the third sound data is used as the output sound data. In a first example, the first to third sound data are generated based on generation data and musical score data. In a second example, the first to third sound data are generated based on the inferior sound data, the reference sound data, and the superior sound data; here, the inferior sound data, the reference sound data, and the superior sound data are all generated based on generation data and musical score data. In a third example, the inferior sound data, the reference sound data, and the superior sound data are used as the first to third sound data, respectively.
 Note that the plurality of types of sound data may be the inferior sound data, the reference sound data, and the superior sound data. In this case, the first sound data corresponding to the inferior state is the inferior sound data, the second sound data corresponding to the normal state is the reference sound data, and the third sound data corresponding to the superior state is the superior sound data. As yet another example, the plurality of types of sound data may be divided into a plurality of stages such that the higher the parameter, the lower the proportion of inferior parts included. Each of the plurality of types of sound data may also be generated for each character serving as a game object.
 [Generation means]
 The generation unit 11D generates the output sound data. As an example, the generation unit 11D generates the output sound data using generation data that captures the singing characteristics of a human performer (for example, an idol or a voice actor) who plays the character of the game object. This generation data is created from original data obtained by recording a plurality of songs (for example, three songs) sung by the performer. Then, voice creation software, an AI (Artificial Intelligence) created through machine learning, generates the output sound data of a desired song using the generation data and the musical score data of the desired song. Therefore, it is also possible to generate output sound data for songs other than the recorded songs. Here, the generation data is used to reproduce the characteristics of the performer's singing (for example, the timing of breaths and the degree of pitch deviation). Therefore, when output sound data is generated using different generation data, songs with different characteristics, that is, different output sound data, are generated even from the same musical score data.
 For example, the generation unit 11D generates output sound data corresponding to the state indicated by the state information (for example, parameters) of the training object acquired by the state acquisition unit 11A. For this purpose, the terminal storage unit 12 stores the generation data and the voice creation software. Note that the voice creation software is downloaded from the server 30 in advance. The generation data is generated in the server 30 and stored in the server storage unit 32, and is transmitted from the server 30 in response to a download request from the game terminal 10. Alternatively, the server 30 may transmit the generation data in advance in response to a request from the game terminal 10. The terminal storage unit 12 may also store voice creation software that has already learned the singing characteristics of the performer indicated by the generation data.
 As an example, the generation unit 11D generates the output sound data in real time before the start of and during the live performance. Alternatively, the generation unit 11D may generate the output sound data at the timing when the state acquisition unit 11A acquires the state information. Specifically, when generating output sound data corresponding to the normal state, the generation unit 11D generates output sound data with few or no inferior parts. When generating output sound data corresponding to the superior state, the generation unit 11D generates output sound data with no inferior parts that includes embellishment parts. Furthermore, when generating output sound data corresponding to the inferior state, the generation unit 11D generates output sound data with many inferior parts. Note that the proportion of inferior parts may increase or decrease continuously or in steps according to the parameter. The proportion of inferior parts may also be determined according to a table in which the proportion of inferior parts is defined for each parameter value.
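The two ways of deriving the proportion of inferior parts mentioned above, continuous and table-based, can be sketched as follows. All numeric values are hypothetical:

```python
# Two hypothetical ways to derive the proportion of inferior parts
# from a parameter value in [0, 100].
def inferior_ratio_continuous(param, max_param=100):
    # Continuous variant: the ratio falls linearly as the parameter rises.
    return max(0.0, 1.0 - param / max_param)

# Stepwise variant: (parameter threshold, inferior-part ratio) bands.
INFERIOR_RATIO_TABLE = [(0, 0.8), (40, 0.4), (80, 0.1), (100, 0.0)]

def inferior_ratio_table(param):
    # Take the ratio of the highest band whose threshold is reached.
    ratio = INFERIOR_RATIO_TABLE[0][1]
    for threshold, r in INFERIOR_RATIO_TABLE:
        if param >= threshold:
            ratio = r
    return ratio
```

The continuous variant gives the fine-grained expression of proficiency changes described above, while the table variant corresponds to the predefined per-parameter table.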
 This makes it possible to reduce the amount of data downloaded from the server 30. In addition, since the proportion of inferior parts can be finely varied according to the parameter, the expressiveness of changes in proficiency can be enhanced. Furthermore, even if the parameters of the training object change during the live performance due to user operations (for example, using an item or activating a skill), output sound data can be generated with the proportion of inferior parts changed according to the changed parameters. Moreover, output sound data including the necessary proportion of inferior parts can be generated without generating many types of output sound data.
 When the generation unit 11D generates the output sound data in real time and there are parameters of the trained character that fluctuate during the live performance (dynamic parameters), such as endurance, elation, or tension, the output sound data may be generated according to these parameters. For example, the generation unit 11D generates the output sound data so that there are few inferior parts in the early stage of the live performance, when endurance is high, and many inferior parts in the late stage of the live performance, when endurance has dropped. The generation unit 11D may also generate the output sound data so that elation rises during exciting periods of the live performance, such as the chorus of the song, and inferior parts are reduced during those periods. The generation unit 11D may also generate the output sound data so that inferior parts are more numerous in parts corresponding to periods of high tension early in the live performance, such as the opening of the song. Alternatively, the generation unit 11D may determine the amount of inferior parts according to a combination of two or more of these dynamic parameters and generate the output sound data. Note that the dynamic parameters may be other parameters, such as singing ability. The parameters may also fluctuate during the live performance due to the use of an item or the activation of a skill. Furthermore, a dynamic parameter of the trained character may be a kind of parameter that changes through training, or may be a value that does not change through training but fluctuates during the live performance.
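Combining two or more dynamic parameters into a per-section proportion of inferior parts can be sketched as follows. The weighting scheme and all values are hypothetical:

```python
# Sketch of combining dynamic parameters into an inferior-part ratio
# for each section of a live performance (weights are hypothetical).
def section_inferior_ratio(endurance, elation, tension):
    """All inputs in [0, 1]. Low endurance, low elation, and high
    tension each increase the proportion of inferior parts."""
    ratio = 0.5 * (1.0 - endurance) + 0.3 * (1.0 - elation) + 0.2 * tension
    return min(1.0, max(0.0, ratio))

# Opening of the song: high endurance but high tension.
early = section_inferior_ratio(endurance=0.9, elation=0.3, tension=0.8)
# Chorus: elation rises and tension falls, so inferior parts decrease.
chorus = section_inferior_ratio(endurance=0.7, elation=0.9, tension=0.2)
```

Under these weights, the chorus yields a lower inferior-part ratio than the tense opening, matching the behavior described in the paragraph above.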
 As another example, the generation unit 11D may generate the output sound data based on the inferior sound data and the like. Specifically, the generation unit 11D generates the output sound data (for example, the first to third sound data) by mixing inferior sound data, which is an example of one sound data that includes inferior parts in at least a portion thereof, with reference sound data, which is an example of at least one other sound data that has a smaller proportion of inferior parts than the one sound data or contains no inferior parts. Here, an inferior part differs from the other sound data in sound output timing or pitch. Since output sound data including inferior parts can thus be generated by mixing, the load of the generation processing can be reduced. Note that the generation unit 11D can additionally use the superior sound data as other sound data for mixing.
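One plausible reading of this mixing is a per-segment selection: for each segment (for example, a bar) of the song, take the segment either from the inferior sound data or from the reference sound data according to the desired proportion of inferior parts. This is a minimal sketch under that assumption, with all names hypothetical:

```python
import random

# Per-segment mixing: each output segment comes from the inferior or
# the reference sound data according to the inferior-part ratio.
def mix_output(inferior_segments, reference_segments, inferior_ratio, seed=0):
    rng = random.Random(seed)  # seeded for reproducible output
    output = []
    for bad, good in zip(inferior_segments, reference_segments):
        output.append(bad if rng.random() < inferior_ratio else good)
    return output

out = mix_output(["b1", "b2", "b3", "b4"], ["g1", "g2", "g3", "g4"], 0.0)
```

A ratio of 0.0 reproduces the reference data unchanged and 1.0 yields the fully inferior singing; intermediate ratios interpolate how well the song is sung without synthesizing new audio, which is the processing-load advantage noted above.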
For example, the inferior sound data, reference sound data, and superior sound data are generated on the server 30 using the generation data and stored in the server storage unit 32. These sound data are then transmitted from the server 30 in response to a download request from the game terminal 10. Alternatively, the server 30 may transmit the inferior, reference, and superior sound data in advance in response to a request from the game terminal 10. The proportion in which each of the inferior, reference, and superior sound data is used may be increased or decreased according to the parameters, and these proportions may be determined in advance for each parameter.
More specifically, the generation of output sound data by mixing will be described with reference to FIGS. 4 and 5. FIG. 4A is a schematic diagram showing the proportion of inferior parts in output sound data acquired mainly at the beginning of the training part, before the singing ability has risen. FIG. 4B is a schematic diagram showing the proportion of inferior parts in output sound data acquired mainly in the middle of the training part, after the singing ability has risen to some extent. FIG. 4C is a schematic diagram showing the proportion of inferior parts in output sound data acquired mainly at the end of the training part, after the singing ability has risen. FIG. 5A is a schematic diagram showing superior sound data corresponding to a song that includes a performance part. FIG. 5B is a schematic diagram showing reference sound data corresponding to the song as written in the musical score. FIG. 5C is a schematic diagram showing inferior sound data corresponding to the song with inferior parts.
The generation unit 11D generates the output sound data of FIG. 4A (for example, the first sound data), which includes inferior parts, using the reference sound data and the inferior sound data. Specifically, as shown in FIG. 4A, the reference sound data is used for the lyric parts "ue" and "kikukeko", and the inferior sound data is used for the remaining parts. Here, as shown in FIG. 5C, the inferior sound data includes, as inferior parts, a part B1 in which the pitch is shifted higher, a part B2 in which the pitch is shifted lower, and a part B3 in which the pitch is shifted higher and the timing is delayed.
The generation unit 11D then uses the inferior sound data for the lyric part "ai". As a result, output sound data is generated in which the lyric part "ai" includes, as an inferior part, the part B1 with the pitch shifted higher. The generation unit 11D likewise uses the inferior sound data for the lyric part "oka"; as a result, output sound data is generated in which that part includes, as an inferior part, the part B2 with the pitch shifted lower. Further, the generation unit 11D uses the inferior sound data for the lyric part "sashisuseso"; as a result, output sound data is generated in which that part includes, as an inferior part, the part B3 with the pitch shifted higher and the timing delayed.
The generation unit 11D also generates the output sound data of FIG. 4B (for example, the second sound data), which has few inferior parts, using the superior sound data in addition to the reference sound data and the inferior sound data. Specifically, as shown in FIG. 4B, the superior sound data is used for the lyric parts "eo" and "suseso", the inferior sound data is used for the lyric parts "ku" and "sa", and the reference sound data is used for the remaining parts. Here, as shown in FIG. 5A, the superior sound data includes, as a performance part, a part P to which vibrato is applied.
The generation unit 11D then uses the superior sound data for the lyric part "so". As a result, output sound data is generated in which the lyric part "so" includes, as a performance part, the part P with vibrato. The generation unit 11D uses the inferior sound data for the lyric part "ku"; as a result, output sound data is generated in which that part includes, as an inferior part, the part B2 with the pitch shifted lower. Further, the generation unit 11D uses the inferior sound data for the lyric part "sa"; as a result, output sound data is generated in which that part includes, as an inferior part, the part B3 with the pitch shifted higher and the timing delayed.
Further, the generation unit 11D generates the output sound data of FIG. 4C (for example, the third sound data), which has no inferior parts, using the reference sound data and the superior sound data. Specifically, as shown in FIG. 4C, the reference sound data is used for the lyric part "ki", and the superior sound data is used for the remaining parts. As in the output sound data of FIG. 4B, the generation unit 11D uses the superior sound data for the lyric part "so", so that output sound data is generated in which the lyric part "so" includes, as a performance part, the part P with vibrato.
In this way, the output sound data of FIG. 4A includes inferior parts in the lyric parts "ai", "oka", and "sashisuseso". In contrast, the output sound data of FIG. 4B includes inferior parts only in the lyric parts "ku" and "sa", and thus has comparatively fewer inferior parts than that of FIG. 4A. The output sound data of FIG. 4C includes no inferior parts, and thus has comparatively fewer inferior parts than those of FIGS. 4A and 4B.
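The segment-by-segment selection illustrated in FIGS. 4A to 4C can be sketched as follows. The data structures and names are hypothetical, and short strings stand in for audio slices.

```python
def mix_output(segments, source_map, tracks):
    """Build output sound data by taking, for each lyric segment, the
    matching slice of the track named by source_map."""
    return [tracks[source_map[seg]][seg] for seg in segments]

# FIG. 4A: reference data for "ue" and "kikukeko", inferior data elsewhere.
segments = ["ai", "ue", "oka", "kikukeko", "sashisuseso"]
source_map_4a = {seg: "inferior" for seg in segments}
source_map_4a.update({"ue": "reference", "kikukeko": "reference"})

# Strings stand in for audio slices ("track:segment").
tracks = {
    name: {seg: f"{name}:{seg}" for seg in segments}
    for name in ("inferior", "reference", "superior")
}

print(mix_output(segments, source_map_4a, tracks))
```

The maps for FIGS. 4B and 4C would differ only in which segments point at the superior and inferior tracks.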
When performing the mixing described above, the generation unit 11D transmits a download request for the inferior, reference, and superior sound data to the server 30 before the live performance starts. The generation unit 11D then mixes the inferior sound data and the like downloaded from the server 30, in real time before and during the performance, to generate output sound data corresponding to the state indicated by the state information acquired by the state acquisition unit 11A. Alternatively, the inferior, reference, and superior sound data may be downloaded from the server 30 to the game terminal 10 in advance, and the generation unit 11D mixes the pre-downloaded data in real time before and during the performance to generate output sound data corresponding to that state.
In these cases, the inferior, superior, and reference sound data are prepared in advance for each song. When a unit consisting of a plurality of game objects, each corresponding to a character, sings the same song, the inferior, superior, and reference sound data may be prepared for each character.
The generation unit 11D may also generate the inferior, reference, and superior sound data itself using the generation data. The generation data is generated on the server 30, stored in the server storage unit 32, and transmitted from the server 30 in response to a download request from the game terminal 10. For example, the generation unit 11D mixes the inferior sound data and the like that it generated in advance, and generates, in real time during the performance, output sound data corresponding to the state indicated by the state information acquired by the state acquisition unit 11A.
When the generation unit 11D generates the output sound data in real time during the performance, it may mix parts of the inferior, reference, and superior sound data in units of bars or notes so that the result contains a proportion of inferior parts corresponding to the parameters of the trained object. As an example, when the singing-ability parameter is low, the generation unit 11D increases the proportion of inferior sound data used, taking the inferior sound data as the base and switching some of its bars to parts of the reference sound data. Conversely, when the singing-ability parameter is high, the generation unit 11D decreases the proportion of inferior sound data used, taking the reference sound data as the base and switching some of its bars to parts of the inferior sound data.
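The bar-level switching described above might look like the following sketch. The linear mapping from singing ability to the number of inferior bars is an assumption, and a real implementation would scatter the switched bars through the song rather than grouping them at the start.

```python
def choose_sources(num_bars, singing, max_singing=1000):
    """Return one source label per bar. Low singing ability yields
    mostly "inferior" bars; high singing ability mostly "reference"
    bars. The linear ratio and max_singing value are assumptions."""
    inferior_bars = round(num_bars * (1.0 - singing / max_singing))
    return (["inferior"] * inferior_bars
            + ["reference"] * (num_bars - inferior_bars))

print(choose_sources(16, singing=200).count("inferior"))  # mostly inferior
print(choose_sources(16, singing=800).count("inferior"))  # mostly reference
```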
Note that, instead of generating the output sound data in real time during the performance, the generation unit 11D may generate it before the performance starts (for example, at the timing when the state acquisition unit 11A acquires the state information), based on the generation data and the musical score data. In this case, the generation unit 11D includes the generated output sound data in the terminal audio data 12B and stores it in the terminal storage unit 12. Alternatively, the generation unit 11D may include the generated output sound data in the server audio data 32A and store it in the server storage unit 32. The proportion of inferior parts may be increased or decreased continuously or in steps according to the parameter, or may be determined according to a table that specifies the proportion of inferior parts for each parameter value.
Further, the generation unit 11D may generate a plurality of types of sound data in advance from the generation data as candidates for the output sound data. For example, the generation unit 11D generates the plurality of types of sound data in advance for each state indicated by the state information (for example, the parameters) of the trained object. As an example, when the singing ability is low, the generation unit 11D generates output-sound-data candidates that include inferior parts with a large pitch deviation, and generates candidates whose pitch deviation shrinks as the singing ability rises. The generation unit 11D then includes the plurality of generated sound data in the terminal audio data 12B and stores them in the terminal storage unit 12.
The generation unit 11D may also generate, as candidates for the output sound data, a plurality of types of sound data for each material object, so that the degree of degradation of the inferior parts differs in stages according to the state information of the trained object. The generation unit 11D then includes the plurality of generated sound data in the terminal audio data 12B and stores them in the terminal storage unit 12. For example, the generation unit 11D generates in advance output-sound-data candidates for a normal pattern in which the singing ability falls within a predetermined range, a superior pattern in which the singing ability exceeds that range, and an inferior pattern in which the singing ability falls below it.
The generation unit 11D may also generate the output sound data or the inferior sound data according to musical score data in which the positions of the inferior parts within the song are set. For example, a song consisting entirely of degraded sounds would be unpleasant for the user to listen to. The musical score data therefore sets the inferior parts at positions corresponding to predetermined portions of the song so that not all of the sound is degraded. For example, inferior parts are set at positions corresponding to the opening of the song, the end of the song, high-pitched portions, low-pitched portions, or portions with complicated lyrics, and the generation unit 11D generates the output sound data or the inferior sound data so that the inferior parts fall at these positions. In other words, the musical score data may predetermine, for each piece of music, the portions that may be made inferior parts, the portions that may be made performance parts, the portions that must not be made inferior parts, and the portions that must not be made performance parts; that is, the portions of the music that can, or cannot, be made superior or inferior parts may be predetermined.
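One way to picture such score data is as a list of sections flagged for where degradation is allowed. The structure and field names below are purely hypothetical; the specification does not prescribe a format.

```python
# Hypothetical score entries marking where degradation is permitted.
SCORE = [
    {"section": "intro",  "allow_inferior": True},
    {"section": "verse",  "allow_inferior": False},
    {"section": "chorus", "allow_inferior": False},
    {"section": "outro",  "allow_inferior": True},
]

def degradable_sections(score, requested):
    """Filter the requested sections down to those the score permits."""
    allowed = {entry["section"] for entry in score if entry["allow_inferior"]}
    return [s for s in requested if s in allowed]

print(degradable_sections(SCORE, ["intro", "chorus", "outro"]))
```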
The content of the inferior parts may also be set in the musical score data. The content of an inferior part is, for example, the degree and the manner of the degradation; specifically, the degree to which the voice becomes softer or louder, or a manner in which the voice becomes hoarse. The generation unit 11D then generates the output sound data so that the sound is degraded according to the content of the inferior parts set in the musical score data.
Further, the positions and content of the inferior parts may be set for each character serving as a game object. For example, a character with a low voice is set so that the pitch in high-pitched portions falls below that of the original song; a character who is easily nervous is set so that the voice becomes hoarse at the opening of the song; and a character with low physical strength is set so that the voice becomes hoarse at the end of the song. The content of the inferior parts may further be set according to the parameters of the trained object before the performance starts, or according to parameters of the trained object that change during the performance. For example, the musical score data corresponding to low physical strength is set so that the voice becomes hoarse at the end of the song.
The generation unit 11D may divide the characters serving as game objects into a plurality of types and use musical score data whose inferior-part pattern differs for each type. For example, the generation unit 11D uses musical score data for a pattern that shifts the pitch higher, a pattern that shifts the pitch lower, a pattern that advances the timing, and a pattern that delays the timing. Specifically, a pattern is defined for each character, and the generation unit 11D uses the musical score data corresponding to that pattern. For example, for character A the generation unit 11D generates output sound data according to musical score data in which inferior parts of the pitch-raising pattern are set, while for character B it generates output sound data according to musical score data in which inferior parts of the timing-advancing pattern are set. This reduces the number of musical score data that must be created.
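The pattern sharing described above can be sketched as a small table from character to degradation pattern, so that only one pattern definition is needed per type. The pattern functions and the character assignments below are illustrative assumptions.

```python
# Each pattern degrades a single note (pitch in semitones, time in seconds).
PATTERNS = {
    "pitch_up":   lambda note: {**note, "pitch": note["pitch"] + 1},
    "pitch_down": lambda note: {**note, "pitch": note["pitch"] - 1},
    "early":      lambda note: {**note, "time": note["time"] - 0.1},
    "late":       lambda note: {**note, "time": note["time"] + 0.1},
}

CHARACTER_PATTERN = {"A": "pitch_up", "B": "early"}  # assumed assignment

def degrade(character, note):
    """Apply the character's assigned degradation pattern to one note."""
    return PATTERNS[CHARACTER_PATTERN[character]](note)

print(degrade("A", {"pitch": 60, "time": 1.0}))
```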
As an example, the musical score data is created by the administrator of the server 30 or by the user of the game terminal 10. Alternatively, the musical score data may be generated automatically by the game terminal 10 or the server 30. Note that the use of musical score data is only an example, and the portions to be degraded need not be set in advance; for example, the data acquisition unit 11B may select the portions to be degraded at random and generate the output sound data with the selected portions degraded.
Further, the generation unit 11D may generate the output sound data by referring to a table that gives the proportion of inferior parts according to the state information (for example, the parameters) of the trained object. For example, the generation unit 11D refers to a table that associates a low singing ability, at or below a predetermined value, with a proportion of inferior parts or a usage proportion of inferior sound data; when the singing ability is at or below that value, the generation unit 11D generates the output sound data so that the associated proportion is reflected. Similarly, the generation unit 11D refers to a table that associates a singing ability above the predetermined value with a proportion of inferior parts or a usage proportion of inferior sound data, and when the singing ability exceeds that value, generates the output sound data so that the associated proportion is reflected.
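Such a table might be realized as a banded lookup; the band boundaries and usage proportions below are hypothetical, not taken from the specification.

```python
import bisect

# Hypothetical bands: (upper bound of singing ability, inferior-data usage).
TABLE = [(400, 0.6), (600, 0.3), (float("inf"), 0.0)]

def inferior_usage(singing):
    """Look up the inferior-sound-data usage proportion for a
    singing-ability value by finding its band in TABLE."""
    bounds = [upper for upper, _ in TABLE]
    return TABLE[bisect.bisect_left(bounds, singing)][1]

print(inferior_usage(350), inferior_usage(500), inferior_usage(700))
```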
Note that the generation unit 11D may generate the output sound data in a manner different from those described above, as long as it can generate, as candidates for the output sound data, a plurality of types of sound data corresponding to singing of different skill levels. For example, the generation unit 11D may create per-phoneme data from a human performer's pronunciation of sentences and generate the output sound data or the inferior sound data based on that data.
The output sound data described above includes inferior parts or performance parts only in the song portion (that is, the portion that is sung). In the output sound data, for example the first to third sound data, the musical portion played by instruments and the like other than the song portion (that is, the portion that is performed rather than sung) has the same waveform in each; for example, the musical portion of the first to third sound data corresponds to the music played as written in the score and contains no inferior parts.
[Status acquisition means]
The state acquisition unit 11A acquires state information indicating the state of a game object. The game object is, as one example, a trained object, and the states of the trained object may include an inferior state, a superior state, and other, normal states. The state of the trained object may further be divided into a plurality of levels, for example four or more levels such as high, somewhat high, somewhat low, and low. Based on the state information acquired by the state acquisition unit 11A, the data acquisition unit 11B acquires output sound data corresponding to the state.
Specifically, when a parameter (for example, singing ability or physical strength) is lower than a predetermined value, or when a status ailment such as illness or injury has occurred, the data acquisition unit 11B acquires output sound data corresponding to the inferior state. Conversely, when the parameter is higher than a predetermined value, or when no status ailment such as illness or injury has occurred, the data acquisition unit 11B acquires output sound data corresponding to the superior state. Here, there may be one predetermined value or two or more; having two or more means, for example, that the predetermined value used to judge whether the state is inferior differs from the predetermined value used to judge whether it is superior. In this way, a plurality of predetermined values may be used to judge a plurality of states. The data acquisition unit 11B may further acquire output sound data corresponding to the inferior state when the value of a parameter such as mental strength is low, or when a parameter such as stamina or physical strength is low.
Note that the state indicated by the state information may be any state that changes according to parameters, actions, or skills; for example, the state includes various conditions such as low level, high level, fatigue, poor condition, good condition, buffed, and debuffed. The state information is information for identifying the state: specifically, a parameter value, information indicating whether a flag such as illness or injury is on or off, or state identification information that uniquely identifies the state. For example, by judging the state from a parameter serving as the state information, the increase or decrease of the parameter resulting from training can be reflected in the skill of the singing output based on the output sound data. Alternatively, the state acquisition unit 11A may determine the state indicated by the state information, and the data acquisition unit 11B may acquire output sound data corresponding to the determined state; for example, the state acquisition unit 11A determines the state based on the parameters associated with the trained object.
As one example, before the performance starts, the state acquisition unit 11A refers to the object data 12A and acquires the singing-ability parameter of the trained object, and the data acquisition unit 11B acquires output sound data corresponding to the state indicated by the singing ability. When the acquired singing-ability value falls within the range of 400 to 600 inclusive, the data acquisition unit 11B acquires output sound data corresponding to the normal state; when the value is below 400, it acquires output sound data corresponding to the inferior state; and when the value is above 600, it acquires output sound data corresponding to the superior state.
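The thresholds in this example translate directly into a small classifier; only the band boundaries 400 and 600 come from the text.

```python
def singing_state(singing):
    """Classify a singing-ability value: 400-600 inclusive is normal,
    below 400 is inferior, above 600 is superior."""
    if singing < 400:
        return "inferior"
    if singing > 600:
        return "superior"
    return "normal"

print([singing_state(v) for v in (399, 400, 600, 601)])
```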
The state acquisition unit 11A may also acquire state information that varies while the live part is being performed. In this case, the data acquisition unit 11B acquires output sound data corresponding to the state of the trained object indicated by the parameters that vary during the performance. For example, the state acquisition unit 11A acquires dynamic parameters related to the performance, such as the number of fans, which rises and falls during the performance, the overall excitement of the performance, the amount of cheering, or, when the performance is virtually streamed within the game space, the number of viewers or the amount of tips. The data acquisition unit 11B then acquires output sound data corresponding to the state of excitement of the performance based on the acquired values. Dynamic parameters of the trained character (for example, stamina, elation, and tension) may also rise and fall, and the state acquisition unit 11A may acquire the changed parameters.
As one example, the state acquisition unit 11A may refer to the object data 12A in real time during the performance and acquire the stamina parameter of the trained object. When the acquired stamina value is lower than one third of its maximum, the data acquisition unit 11B acquires output sound data corresponding to the inferior state; when the value falls within the range of one third to two thirds of the maximum, it acquires output sound data corresponding to the normal state; and when the value is higher than two thirds of the maximum, it acquires output sound data corresponding to the superior state.
As described above, the parameters vary as the game progresses: for example, they rise as the game advances, or fall as the state of the game object worsens. Specifically, as the game progresses and turns of the training part elapse, the value of a parameter of the trained object (for example, singing ability) rises; and when the trained object becomes fatigued and its condition worsens, the value of a parameter (for example, physical strength) falls. Alternatively, a parameter may fall as the game progresses.
The state acquisition unit 11A may also acquire the state information of each game object in a unit consisting of a plurality of game objects including the trained object. In this case, the data acquisition unit 11B acquires output sound data corresponding to the state of each game object. For example, the data acquisition unit 11B acquires output sound data corresponding to each game object's level of singing proficiency based on that object's singing-ability value, or output sound data corresponding to each game object's state of fatigue based on that object's physical-strength value. By acquiring the state information of every member of the unit, output sound data corresponding to the parameters can be acquired, and voices output, for all members of the unit.
 However, even when the unit performs live, the state acquisition unit 11A may acquire only the state information of the training object. In this case, output sound data corresponding to the parameters can be acquired and audio can be output for the training object alone.
 [Data acquisition means]
 The data acquisition unit 11B acquires sound data corresponding to the song sung by the training object, namely output sound data corresponding to the state indicated by the state information acquired by the state acquisition unit 11A. For example, the data acquisition unit 11B acquires output sound data corresponding to the state by acquiring output sound data that the generation unit 11D has generated according to the state. Here, the data acquisition unit 11B may acquire output sound data generated in advance by the generation unit 11D before the start of the live performance, or may acquire output sound data generated by the generation unit 11D in real time during the live performance. To that end, the data acquisition unit 11B may cause the generation unit 11D to generate output sound data corresponding to the state of the training object. The data acquisition unit 11B may also select and acquire output sound data corresponding to the state from the terminal storage unit 12 or the server storage unit 32.
 The output sound data corresponding to the inferior state includes, in at least a part, an inferior portion corresponding to the state indicated by the state information. Specifically, when the state information indicates the inferior state, the data acquisition unit 11B acquires output sound data with a higher proportion of inferior portions than when the state information indicates the excellent state. When the state information indicates the excellent state, the data acquisition unit 11B acquires output sound data with a low proportion of inferior portions or containing no inferior portions.
 The data acquisition unit 11B may also select and acquire output sound data corresponding to the state information from among a plurality of mutually different types of sound data. For example, the data acquisition unit 11B selects and acquires output sound data corresponding to the parameter from among a plurality of types (for example, three patterns) of sound data generated in advance. Specifically, when the singing ability falls below a predetermined range and the state information indicates the inferior state, the data acquisition unit 11B selects and acquires output sound data with a high proportion of inferior portions (for example, first sound data). When the singing ability falls within the predetermined range and the state information indicates the normal state, the data acquisition unit 11B selects and acquires output sound data with a low proportion of inferior portions or with no inferior portions (for example, second sound data). When the singing ability exceeds the predetermined range and the state information indicates the excellent state, the data acquisition unit 11B selects and acquires output sound data with a low proportion of inferior portions or containing none (for example, third sound data). This makes it possible to omit the process of generating output sound data each time, reducing the processing load.
 That is, when the training object is in the excellent state, the data acquisition unit 11B acquires, taking output sound data that contains an inferior portion in at least a part as one sound data, other sound data that has a lower proportion of inferior portions than the one sound data or that contains no inferior portions. As a result, audio containing inferior portions is output less often or not at all, so that, for example, a skilled state with high singing ability or a lively state with high physical strength can be expressed during the game.
 When the state indicated by the state information is a state in which a skill is not being used, the data acquisition unit 11B may acquire output sound data containing an inferior portion in at least a part. On the other hand, when a skill is being used, the data acquisition unit 11B may acquire output sound data having fewer inferior portions or containing no inferior portions. Alternatively, when a skill is being used, the data acquisition unit 11B may acquire output sound data having a larger proportion of performance portions to which effects have been applied, or containing such performance portions.
 Note that the data acquisition unit 11B may acquire the output sound data of each game object in the unit including the training object. As an example, in the live part, a unit of two to seven members including the training object may sing. In this case, which game object sings which part of the song may change through automatic selection, such as placing the training object at the center of the unit, or through user selection. For this purpose, output sound data corresponding to the state is generated for the entire song for each game object, and the data acquisition unit 11B acquires output sound data corresponding to the states of all members of the unit.
 Further, the game progression unit 11C causes the audio output unit 16 to output audio based on the output sound data of all members of the unit in such a way that unnecessary portions of the audio are not output. Specifically, the game progression unit 11C causes the audio output unit 16 to output only the audio of the song parts assigned to each game object, and not the audio of the other, unassigned portions. This eliminates the need to prepare output sound data for the unit every time the part assignment changes, so the number of output sound data sets can be reduced and data management becomes easier.
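As a sketch of the part masking described above (all names and the chunked-audio representation are hypothetical assumptions), each member's full-song audio is kept, and only the segments assigned to that member contribute to the output:

```python
def mix_assigned_parts(tracks: dict, assignment: list) -> list:
    """tracks: member name -> list of per-segment audio chunks covering
    the whole song. assignment: for each segment index, the member who
    sings it. Returns the song as a list of chunks, taking each segment
    only from the member it is assigned to; all other members' audio
    for that segment is simply not output."""
    return [tracks[member][i] for i, member in enumerate(assignment)]

song = mix_assigned_parts(
    {"A": ["A0", "A1", "A2"], "B": ["B0", "B1", "B2"]},
    ["A", "B", "A"],  # member B is assigned the middle segment only
)
# song == ["A0", "B1", "A2"]
```

Because each member's data already covers the whole song, reassigning parts only changes the `assignment` list, not the stored sound data.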
 Alternatively, when the unit sings, the data acquisition unit 11B may acquire the output sound data of game objects other than the training object according to the state of the training object. For example, when the training object is in the inferior state, the data acquisition unit 11B also acquires output sound data corresponding to the inferior state for the other game objects.
 When the unit sings, the data acquisition unit 11B may also acquire the output sound data of the other game objects according to a state indicated by a predetermined parameter or a fixed state (for example, the excellent state). Furthermore, when the unit sings, the data acquisition unit 11B may acquire predetermined output sound data (for example, the third sound data) as the output sound data of the other game objects, regardless of their parameters.
 Note that a game object included in the unit may be a training object whose training was completed earlier (for example, an inherited object). In this case, the data acquisition unit 11B may acquire output sound data corresponding to the state indicated by the parameters of the inherited object as the output sound data of that inherited object. For example, when the singing ability of the inherited object exceeds a predetermined range, indicating the excellent state, the data acquisition unit 11B acquires output sound data corresponding to the excellent state as the output sound data of the inherited object.
 When the number of inherited objects is less than the number of members in the unit, a material object may be included in the unit. In this case, the data acquisition unit 11B may acquire output sound data corresponding to the state indicated by the parameters of the material object as the output sound data of that material object. For example, when the singing ability of the material object falls below a predetermined range, indicating the inferior state, the data acquisition unit 11B acquires output sound data corresponding to the inferior state as the output sound data of the material object. Alternatively, when the number of inherited objects is less than the number of members in the unit, the unit may be formed with that smaller number of members.
 Note that the data acquisition unit 11B may cause the generation unit 11D to generate, and may acquire, output sound data for the entire unit that reflects the part assignment within the unit. In this way, the data acquisition unit 11B need only acquire the output sound data of the unit, and does not need to acquire the output sound data of each character. Alternatively, if data size is not a concern, the generation unit 11D may generate the output sound data of the unit with the song parts separated for each game object.
 [Game progression means]
 The game progression unit 11C, which is an example of game progression means, simulates the training of a game object, and changes the parameters of the training object according to the progress of the game. For example, when a lesson that raises singing ability is performed in the training part, the game progression unit 11C raises the singing ability of the training object and reduces its physical strength. The game progression unit 11C then associates the increased or decreased parameters with object identification information that uniquely identifies the training object, includes them in the object data 12A, and stores them in the terminal storage unit 12. Alternatively, the game progression unit 11C may store the data of the training object in the server storage unit 32.
 The game progression unit 11C also raises the dance ability of the training object when a lesson that raises dance ability is performed in the training part. The game progression unit 11C may acquire a video corresponding to the dance ability from the server storage unit 32 or the terminal storage unit 12 and display it on the terminal display unit 15. In this case, the server storage unit 32 or the terminal storage unit 12 stores, as dance videos corresponding to dance ability, for example, an inferior dance video representing a state in which dance ability is low and the dancing is poor, and a superior dance video representing a state in which dance ability is high and the dancing is skilled. The game progression unit 11C displays these dance videos during the live performance. That is, the server storage unit 32 or the terminal storage unit 12 stores motion data of the training object that is used according to the training state of the training object, and the game progression unit 11C displays a presentation based on the motion data at a predetermined timing. This allows the user to perceive the growth of the training object visually. Note that the motion data is not limited to dance videos, and may be motion data that defines the movements of the training object.
 Furthermore, when the state information indicates the inferior state, the game progression unit 11C may cause the terminal display unit 15 to display an image of the training object with a pained or unconfident expression. Conversely, when the state information indicates the excellent state, the game progression unit 11C may cause the terminal display unit 15 to display an image of the training object with a smiling or confident expression. These images are stored in the server storage unit 32 or the terminal storage unit 12, and the game progression unit 11C acquires an image corresponding to the state from the server storage unit 32 or the terminal storage unit 12 and causes the terminal display unit 15 to display it, for example during the live performance. Accordingly, when the state information indicates the inferior state, that is, when the parameters are relatively low, pained or unconfident expressions are displayed relatively often; when the parameters rise and the state information comes to indicate the normal or excellent state, smiling or confident expressions are displayed relatively often. This allows the user to perceive the growth of the training object visually.
 [Audio output control means]
 The game progression unit 11C, as an example of audio output control means, causes the audio output unit 16, which is an example of audio output means, to output audio based on the output sound data acquired by the data acquisition unit 11B. As an example, the audio output unit 16 is a speaker configured integrally with the game terminal 10. Alternatively, the audio output unit 16 may be separate from the game terminal 10 and connected to it by wire or wirelessly. Furthermore, the audio output unit 16 may be configured integrally with a display device that is separate from the game terminal 10. With this arrangement, before the singing improves, a poor rendition of the song with many inferior portions is output from the audio output unit 16; after the singing improves, a skilled rendition with few or no inferior portions is output from the audio output unit 16. The user can therefore perceive the growth of the training object aurally, and can sense the results of training through the song sung by the training object.
 [Server configuration]
 The configuration of the server 30 will be described with reference to FIG. 3. The server control unit 31 of the server 30 is configured as a computer and includes a processor (not shown). This processor is, for example, a CPU or an MPU, and, based on a program stored in the server storage unit 32, controls the server 30 as a whole and also comprehensively controls various processes. Alternatively, the server control unit 31 can perform control according to a program stored in a portable recording medium such as a CD, DVD, CF card, or USB memory, or in an external storage medium. An operation unit (not shown) including a keyboard or various switches for inputting predetermined commands and data is connected to the server control unit 31 by wire or wirelessly. Furthermore, a display unit (not shown) that displays the input state, setting state, measurement results, and various information of the device is connected to the server control unit 31 by wire or wirelessly.
 The server storage unit 32 is a computer-readable non-transitory storage medium. Specifically, the server storage unit 32 includes storage devices such as a RAM, a ROM, an HDD, and an SSD, and stores the server audio data 32A. The server storage unit 32 may further store data such as image data or music data necessary for the progress of the game, update data for the terminal program PG, and the like. The server communication unit 33 is a communication module, a communication interface, or the like, and enables data to be transmitted and received between the game terminal 10 and the server 30 via the network 50.
 [Output sound data acquisition flow]
 Acquisition of output sound data will be described with reference to FIG. 6. In the training part, the game progression unit 11C simulates the training of the training object and changes the parameters of the training object according to the progress of the game (S101). Thereafter, when starting the live part, the state acquisition unit 11A acquires, for example, singing ability as a parameter serving as the state information of the training object (S102), and passes the acquired parameter to the data acquisition unit 11B (S103).
 When the state indicated by the parameter is the inferior state (YES in S104), the data acquisition unit 11B acquires the first sound data, which is output sound data with many inferior portions (S105). Here, the data acquisition unit 11B acquires the first sound data generated by the generation unit 11D. The game progression unit 11C then uses the first sound data acquired by the data acquisition unit 11B as the output sound data, and causes the audio output unit 16 to output audio based on it (S106). When the state indicated by the parameter is the excellent state (YES in S107), the data acquisition unit 11B acquires the third sound data, which is output sound data with no inferior portions (S108). Here, the data acquisition unit 11B acquires the third sound data generated by the generation unit 11D. The game progression unit 11C then uses the third sound data acquired by the data acquisition unit 11B as the output sound data, and causes the audio output unit 16 to output audio based on it (S106).
 When the state indicated by the parameter is the remaining, normal state (NO in S107), the data acquisition unit 11B acquires the second sound data, which is output sound data with few inferior portions (S109). Here, the data acquisition unit 11B acquires the second sound data generated by the generation unit 11D. The game progression unit 11C then uses the second sound data acquired by the data acquisition unit 11B as the output sound data, and causes the audio output unit 16 to output audio based on it (S106). In this way, in the live part, audio is output based on output sound data corresponding to the singing ability.
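The branching in S104–S109 can be condensed into a small selector. The function name and the string identifiers for the sound data are illustrative assumptions, not identifiers from the embodiment:

```python
def select_output_sound(state: str) -> str:
    """Pick the pre-generated sound data pattern for the given state,
    mirroring steps S104-S109 of the acquisition flow."""
    if state == "inferior":
        return "first_sound_data"   # many inferior portions (S105)
    if state == "excellent":
        return "third_sound_data"   # no inferior portions (S108)
    return "second_sound_data"      # few inferior portions (S109)
```

Whichever pattern is selected is then output as audio (S106), so the branching affects only which data is fetched, not how it is played back.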
 According to the game system 100 of the first embodiment described above, audio based on output sound data acquired according to the parameters can be output. The user can therefore perceive the results of training the training object aurally.
 Note that in a game that does not include an element of training game objects, the data acquisition unit 11B may acquire output sound data according to the value of a parameter of the game object or the degree of progress of the game. For example, the data acquisition unit 11B specifies, from among a plurality of types of output sound data, output sound data corresponding to a parameter value (for example, physical strength) or the degree of progress of the game, and acquires the specified output sound data. As an example, the terminal storage unit 12 stores a table that associates data identification information specifying output sound data with parameters or degrees of game progress, and the data acquisition unit 11B specifies the output sound data by referring to this table.
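The table lookup described here might be sketched as a simple threshold mapping; the thresholds and data identifiers below are hypothetical:

```python
# Hypothetical table associating a progress threshold with the data
# identification information of the output sound data to use once
# that degree of progress has been reached.
PROGRESS_TABLE = [
    (0, "sound_data_rough"),
    (10, "sound_data_mid"),
    (20, "sound_data_polished"),
]

def lookup_sound_data(progress: int) -> str:
    """Return the identifier for the highest threshold that does not
    exceed the current degree of game progress."""
    chosen = PROGRESS_TABLE[0][1]
    for threshold, data_id in PROGRESS_TABLE:
        if progress >= threshold:
            chosen = data_id
    return chosen
```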
 Similarly to the song described above, in a game of the type in which the training object plays a musical instrument live, the data acquisition unit 11B may acquire sound data of the piece played by the training object, namely output sound data corresponding to the state indicated by the state information acquired by the state acquisition unit 11A. For this purpose, the generation unit 11D generates output sound data of the piece according to the state, and the game progression unit 11C causes the audio output unit 16 to output audio based on the output sound data acquired by the data acquisition unit 11B. This allows the user to perceive that the piece played by the training object improves as the training object is trained. In this case, the terminal audio data 12B includes various sound data of the piece to be performed, similar to the sound data of the song described above.
 For example, an inferior portion of the output sound data of a piece corresponding to the inferior state differs from the corresponding portion of the original piece in pitch, output timing, loudness, or the like. The output sound data of a piece corresponding to the excellent state includes, in at least a part, a performance portion to which an effect reflecting playing technique has been applied; for example, the output sound data includes a performance portion rendered with a playing technique such as fast guitar playing. Note that the generation unit 11D may acquire reference sound data and the like of the piece and generate the output sound data based on the acquired reference sound data and the like. Furthermore, the data acquisition unit 11B may acquire, as output sound data corresponding to the state, both sound data of the song sung by the training object and sound data of the piece played by the training object, and the game progression unit 11C may cause the audio output unit 16 to output audio based on the output sound data of the song and the piece.
 [Second embodiment]
 A second embodiment will be described with reference to FIG. 7. The second embodiment differs from the first embodiment in that the server 230 includes a state acquisition unit 211A and a generation unit 211D. In the description of the second embodiment, only the differences from the first embodiment are described; components that have already been described are given the same reference numerals, and their description is omitted. Except where specifically noted, components with the same reference numerals perform substantially the same operations and functions, and their effects are also substantially the same.
 The server storage unit 232 of the server 230 stores a server program PG2, which is an example of the game program. The server program PG2 causes the server control unit 231, as a computer, to function as the state acquisition unit 211A and the generation unit 211D. The state acquisition unit 211A acquires, from the game terminal 210, state information indicating the state of a training object, which is a game object to be trained.
 The data acquisition unit 11B of the game terminal 210 requests output sound data from the server 230 and transmits a parameter as the state information to the server 230. When output sound data is generated according to a parameter that changes during the live performance, the data acquisition unit 11B transmits the parameter when starting the live performance and also transmits the parameter to the server 230 every time it changes. The state acquisition unit 211A passes the received state information to the generation unit 211D, and the generation unit 211D generates output sound data corresponding to the state indicated by the state information acquired by the state acquisition unit 211A.
 For example, the generation unit 211D generates output sound data corresponding to the state indicated by the state information (for example, a parameter) of the training object acquired by the state acquisition unit 211A. For this purpose, the server storage unit 32 stores generation data, musical score data, and audio creation software. As an example, the generation unit 211D generates output sound data using the generation data and the musical score data before the start of the live performance and in real time during it. The server control unit 231 then transmits the generated output sound data to the game terminal 210 by streaming delivery. The data acquisition unit 11B of the game terminal 210 acquires the output sound data corresponding to the state by receiving the transmitted data, and the game progression unit 11C causes the audio output unit 16 to output audio based on the output sound data.
 This makes it possible to reduce the amount of data downloaded from the server 230. Moreover, since the proportion of inferior portions can be finely varied according to the parameter, the expressiveness of changes in proficiency can be enhanced. Furthermore, even if the parameters of the training object change due to user operations during the live performance, output sound data can be generated with the proportion of inferior portions adjusted according to the changed parameters. In addition, output sound data containing the required proportion of inferior portions can be generated without generating many types of output sound data.
 生成部211Dは、ライブ中においてリアルタイムに生成せず、ライブ開始前(例えば状態取得部211Aが状態情報を取得したタイミング)で、生成用データを用いて出力音データを生成してもよい。この場合、生成部211Dは、生成した出力音データをサーバ音声データ32Aに含めて、サーバ記憶部32に記憶させる。そして、サーバ制御部231は、データ取得部11Bからのダウンロード要求に応じて、生成された出力音データをゲーム端末210に送信する。これにより、サーバ230からダウンロードするデータ量を削減できる。また、パラメータに応じて劣等部分の割合を細かく変えることができるため、習熟度の変化の表現度を高めることができる。 The generation unit 211D may generate the output sound data using the generation data before the start of the live performance (for example, at the timing when the status acquisition unit 211A acquires the status information), instead of generating the output sound data in real time during the live performance. In this case, the generation unit 211D includes the generated output sound data in the server audio data 32A and stores it in the server storage unit 32. Then, the server control unit 231 transmits the generated output sound data to the game terminal 210 in response to a download request from the data acquisition unit 11B. Thereby, the amount of data downloaded from the server 230 can be reduced. Furthermore, since the proportion of inferior parts can be finely changed according to the parameters, the degree of expression of changes in proficiency level can be improved.
 なお、生成部211Dは、劣等部分の割合を、パラメータに応じて連続的又は段階的に増減させてもよい。また、パラメータに応じた劣等部の割合が規定されたテーブルにしたがって、劣等部分の割合が定められてもよい。 Note that the generation unit 211D may increase or decrease the proportion of the inferior portion continuously or stepwise according to the parameter. Further, the proportion of the inferior part may be determined according to a table in which the proportion of the inferior part is defined according to the parameter.
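As an illustrative sketch (not part of the embodiment itself), the mapping from a parameter to the proportion of the inferior portion, varied either continuously or stepwise according to a table, could look like the following. The parameter range and the table values below are hypothetical assumptions for illustration only.

```python
# Hypothetical sketch: map a training parameter to the proportion of the
# "inferior" portion in the output sound data. All concrete values here
# (param_max, table thresholds, ratios) are illustrative assumptions.

def inferior_ratio_continuous(param: float, param_max: float = 100.0) -> float:
    """Continuously decrease the inferior ratio as the parameter rises."""
    param = min(max(param, 0.0), param_max)
    return 1.0 - param / param_max  # param 0 -> all inferior, param_max -> none

# Stepwise alternative: a table associating parameter ranges with ratios,
# as in the table-based determination described above.
INFERIOR_RATIO_TABLE = [
    (30.0, 0.6),          # param < 30 -> 60% inferior
    (70.0, 0.3),          # param < 70 -> 30% inferior
    (float("inf"), 0.0),  # otherwise  -> no inferior portion
]

def inferior_ratio_stepwise(param: float) -> float:
    for upper, ratio in INFERIOR_RATIO_TABLE:
        if param < upper:
            return ratio
    return 0.0
```

Either function realizes the idea that a higher parameter yields a smaller inferior proportion; the continuous form gives finer gradation, while the table form matches the predefined-table variant.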
 他の例として、生成部211Dは、歌の基準音データに基づいて、ライブ開始前に出力音データを生成してもよい。具体的に、生成部211Dは、劣等部分を少なくとも一部に含む一の音データの一例である劣等音データと、一の音データと比較して劣等部分の割合が少ないか、又は劣等部分を含まない少なくとも一つの他の音データの一例である基準音データとをミキシングすることによって、出力音データを生成する。これにより、ミキシングによって劣等部分を含む出力音データを生成できるので、生成処理の負荷を低減できる。なお、生成部211Dは、他の音データとして、さらに優等音データをミキシングに用いることができる。 As another example, the generation unit 211D may generate the output sound data before the start of the live performance based on the reference sound data of the song. Specifically, the generation unit 211D generates the output sound data by mixing inferior sound data, which is an example of one sound data that includes an inferior part in at least a portion thereof, with reference sound data, which is an example of at least one other sound data that has a smaller proportion of the inferior part than the one sound data or that does not include the inferior part. This allows output sound data including an inferior part to be generated through mixing, thereby reducing the load of the generation process. Note that the generation unit 211D can further use superior sound data for the mixing as other sound data.
 この場合、生成部211Dは、基準音データ、劣等音データ、及び優等音データを、生成用データと楽譜データとを用いて予め生成している。代替的に、生成部211Dは、出力音データを生成する際に基準音データ、劣等音データ、及び優等音データを生成してもよい。また、生成部211Dは、状態取得部211Aがゲーム端末210から取得したパラメータに応じて、劣等音データ、基準音データ、及び優等音データのそれぞれの使用割合を増減する。さらに、これらのデータの使用割合は、予めパラメータ毎に定められていてもよい。 In this case, the generation unit 211D generates the reference tone data, inferior tone data, and superior tone data in advance using the generation data and musical score data. Alternatively, the generation unit 211D may generate reference sound data, inferior sound data, and superior sound data when generating the output sound data. Furthermore, the generation unit 211D increases or decreases the usage ratio of each of the inferior sound data, the reference sound data, and the superior sound data according to the parameters that the state acquisition unit 211A acquires from the game terminal 210. Furthermore, the usage ratio of these data may be determined in advance for each parameter.
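The mixing of the inferior, reference, and superior sound data with parameter-dependent usage ratios can be sketched as follows. This is a minimal illustration under stated assumptions: sound data is represented as lists of PCM samples, and the ratio thresholds are hypothetical, not values from the embodiment.

```python
# Minimal sketch of ratio-based mixing: the output sound data is a weighted
# sum of inferior, reference, and superior sound data, with usage ratios
# chosen per parameter. Thresholds and ratios are illustrative assumptions.

def mix_ratios(param: float) -> tuple[float, float, float]:
    """Return (inferior, reference, superior) usage ratios for a parameter."""
    if param < 30:
        return (0.7, 0.3, 0.0)   # low skill: mostly inferior data
    if param < 70:
        return (0.2, 0.8, 0.0)   # mid skill: mostly reference data
    return (0.0, 0.7, 0.3)       # high skill: reference plus superior data

def mix(inferior, reference, superior, param):
    """Weighted per-sample mix of three equal-length sample lists."""
    wi, wr, ws = mix_ratios(param)
    return [wi * i + wr * r + ws * s
            for i, r, s in zip(inferior, reference, superior)]
```

In practice the three inputs would be time-aligned renderings of the same score, so the weighted sum blends how "well" the same passage is sung.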
 また、生成部211Dは、ライブ開始前及びライブ中においてリアルタイムに出力音データを生成してもよい。このとき、サーバ制御部231は、生成された出力音データをストリーミング配信の態様でゲーム端末210に送信する。さらに、生成部211Dは、状態取得部211Aが状態情報を取得したタイミングで出力音データを生成してもよい。このとき、サーバ制御部231は、データ取得部11Bからのダウンロード要求に応じて、生成された出力音データをゲーム端末210に送信する。 Additionally, the generation unit 211D may generate output sound data in real time before and during the live performance. At this time, the server control unit 231 transmits the generated output sound data to the game terminal 210 in a streaming distribution manner. Furthermore, the generation unit 211D may generate the output sound data at the timing when the status acquisition unit 211A acquires the status information. At this time, the server control unit 231 transmits the generated output sound data to the game terminal 210 in response to the download request from the data acquisition unit 11B.
 代替的に、生成部211Dは、育成オブジェクトの状態情報に応じて劣等部分の劣化の程度が段階的に異なるように複数種類の出力音データを生成してもよい。すなわち、生成部211Dは、育成オブジェクトの状態情報に応じて、複数のパターン(例えば3パターン)の出力音データを予め生成してもよい。そして、生成部211Dは、生成した複数種類の音データをサーバ音声データ32Aに含めて、サーバ記憶部32に記憶させる。例えば、生成部211Dは、歌唱力が所定範囲に含まれる通常の状態に対応する出力音データと、歌唱力が所定範囲を超える優れた状態に対応する出力音データと、歌唱力が所定範囲を下回る劣った状態に対応する出力音データを予め生成する。 Alternatively, the generation unit 211D may generate multiple types of output sound data such that the degree of deterioration of the inferior part differs in stages according to the state information of the training object. That is, the generation unit 211D may generate a plurality of patterns (for example, three patterns) of output sound data in advance according to the state information of the training object. The generation unit 211D then includes the generated multiple types of sound data in the server audio data 32A and stores them in the server storage unit 32. For example, the generation unit 211D generates in advance output sound data corresponding to a normal state in which the singing ability falls within a predetermined range, output sound data corresponding to an excellent state in which the singing ability exceeds the predetermined range, and output sound data corresponding to an inferior state in which the singing ability falls below the predetermined range.
 この場合、ゲーム端末210のデータ取得部11Bは、状態取得手段としても機能して、育成オブジェクトの状態を示す状態情報を取得する。そして、データ取得部11Bは、育成オブジェクトの状態に応じた出力音データのダウンロード要求をサーバ230へ送信する。データ取得部11Bは、要求した出力音データをサーバ230から取得することによって、状態に応じた出力音データを取得する。これにより、ゲーム端末210の端末記憶部12が記憶するデータのデータ量を削減できる。また、ゲーム端末210において出力音データを生成しないため、ゲーム端末210の処理の負荷を削減できる。 In this case, the data acquisition unit 11B of the game terminal 210 also functions as a status acquisition unit and acquires status information indicating the status of the breeding object. Then, the data acquisition unit 11B transmits to the server 230 a download request for output sound data according to the state of the training object. The data acquisition unit 11B acquires output sound data according to the state by acquiring the requested output sound data from the server 230. Thereby, the amount of data stored in the terminal storage section 12 of the game terminal 210 can be reduced. Furthermore, since the game terminal 210 does not generate output sound data, the processing load on the game terminal 210 can be reduced.
 一例として、データ取得部11Bは、パラメータに応じて特定した出力音データのダウンロードをサーバ230に要求する。具体的に、データ取得部11Bは、歌唱力が所定範囲に含まれる場合、通常の状態に対応する出力音データ(例えば第二音データ)を要求する。また、データ取得部11Bは、歌唱力が所定範囲を超える場合、優れた状態に対応する出力音データ(例えば第三音データ)を要求する。さらに、データ取得部11Bは、歌唱力が所定範囲を下回る場合、劣った状態に対応する出力音データ(例えば第一音データ)を要求する。例えば、端末記憶部12は、出力音データを特定するデータ識別情報とパラメータ又は状態とを関連付けたテーブルを記憶している。そして、データ取得部11Bは、当該テーブルを参照して出力音データを特定し、特定したデータ識別情報の出力音データをサーバ230に送信する。 As an example, the data acquisition unit 11B requests the server 230 to download the output sound data specified according to the parameter. Specifically, when the singing ability falls within a predetermined range, the data acquisition unit 11B requests the output sound data corresponding to the normal state (for example, the second sound data). When the singing ability exceeds the predetermined range, the data acquisition unit 11B requests the output sound data corresponding to the excellent state (for example, the third sound data). Furthermore, when the singing ability falls below the predetermined range, the data acquisition unit 11B requests the output sound data corresponding to the inferior state (for example, the first sound data). For example, the terminal storage unit 12 stores a table that associates data identification information specifying output sound data with parameters or states. The data acquisition unit 11B then refers to the table to specify the output sound data, and sends the server 230 a request for the output sound data of the specified data identification information.
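The table lookup described above can be sketched as a small selection routine. The range bounds and data identifiers below are hypothetical assumptions for illustration; the embodiment does not fix concrete values.

```python
# Illustrative sketch of the terminal-side table lookup: a table associates
# states (derived from the singing-ability parameter) with data identification
# information, and the matching output sound data is requested from the server.
# SKILL_RANGE and the identifiers are assumptions, not values from the text.

SKILL_RANGE = (40, 80)  # hypothetical "predetermined range" of singing ability

DATA_ID_TABLE = {
    "inferior": "sound_1",  # below the range  -> first sound data
    "normal":   "sound_2",  # within the range -> second sound data
    "superior": "sound_3",  # above the range  -> third sound data
}

def state_of(skill: int) -> str:
    """Classify the singing-ability parameter against the predetermined range."""
    lo, hi = SKILL_RANGE
    if skill < lo:
        return "inferior"
    if skill > hi:
        return "superior"
    return "normal"

def data_id_for(skill: int) -> str:
    """Identify which output sound data to request for download."""
    return DATA_ID_TABLE[state_of(skill)]
```

The same lookup could equally live on the server side (the variant described for the server control unit 231), with the terminal sending only the parameter.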
 また、サーバ記憶部232は、複数種類の出力音データを、状態情報又は状態情報によって特定される状態と関連づけて記憶していてもよい。例えば、サーバ記憶部232は、状態情報としてのパラメータと関連づけて出力音データを記憶していてもよい。これにより、状態取得部211Aが取得したパラメータによって、出力音データが特定されゲーム端末210へ送信できる。また、サーバ記憶部232は、劣った状態、通常の状態、及び優れた状態と関連づけて出力音データを記憶していてもよい。これにより、状態取得部211Aが取得したパラメータが示す状態によって、出力音データが特定されゲーム端末210へ送信できる。なお、出力音データの送信は、ストリーミング配信の態様で行われてもよい。ストリーミング配信の場合、要求に応じた出力音データの少なくとも一部が端末音声データ12Bに含まれて、端末記憶部12に記憶される。 Additionally, the server storage unit 232 may store multiple types of output sound data in association with state information or a state specified by the state information. For example, the server storage unit 232 may store output sound data in association with parameters as state information. Thereby, output sound data can be specified and transmitted to the game terminal 210 based on the parameters acquired by the state acquisition unit 211A. Further, the server storage unit 232 may store output sound data in association with a poor state, a normal state, and an excellent state. Thereby, output sound data can be specified and transmitted to the game terminal 210 according to the state indicated by the parameter acquired by the state acquisition unit 211A. Note that the output sound data may be transmitted in the form of streaming distribution. In the case of streaming distribution, at least a part of the output sound data according to the request is included in the terminal audio data 12B and stored in the terminal storage unit 12.
 さらに、データ取得部11Bは、育成オブジェクトの全ての状態に対応する出力音データのダウンロードをサーバ230に要求してもよい。この場合、育成オブジェクトの出力音データは、端末記憶部12に記憶される。そして、データ取得部11Bは、ダウンロードした出力音データの中から、育成オブジェクトの状態に応じて必要な出力音データを選択して取得する。例えば、歌唱力が所定範囲を下回る場合、データ取得部11Bは、劣った状態に対応する出力音データ(例えば第一音データ)を選択して取得する。 Further, the data acquisition unit 11B may request the server 230 to download output sound data corresponding to all states of the breeding object. In this case, the output sound data of the breeding object is stored in the terminal storage section 12. Then, the data acquisition unit 11B selects and acquires necessary output sound data from the downloaded output sound data according to the state of the breeding object. For example, when the singing ability is below a predetermined range, the data acquisition unit 11B selects and acquires output sound data (for example, first sound data) corresponding to the inferior state.
 さらに他の例として、サーバ制御部231は、状態取得部211Aがゲーム端末210から取得した状態情報が示す状態に応じた出力音データを選択してゲーム端末210へ送信してもよい。具体的に、サーバ制御部231は、歌唱力が所定範囲に含まれる場合、通常の状態に対応する出力音データ(例えば第二音データ)を選択する。また、サーバ制御部231は、歌唱力が所定範囲を超える場合、優れた状態に対応する出力音データ(例えば第三音データ)を選択する。さらに、サーバ制御部231は、歌唱力が所定範囲を下回る場合、劣った状態に対応する出力音データ(例えば第一音データ)を選択する。一例として、サーバ記憶部32は、出力音データを特定するデータ識別情報とパラメータ又は状態とを関連付けたテーブルを記憶している。そして、サーバ制御部231は、当該テーブルを参照して出力音データを選択する。 As yet another example, the server control unit 231 may select output sound data according to the state indicated by the state information acquired from the game terminal 210 by the state acquisition unit 211A, and transmit it to the game terminal 210. Specifically, when the singing ability is within a predetermined range, the server control unit 231 selects output sound data (for example, second sound data) corresponding to a normal state. Further, when the singing ability exceeds a predetermined range, the server control unit 231 selects output sound data (for example, third sound data) corresponding to an excellent state. Furthermore, when the singing ability falls below a predetermined range, the server control unit 231 selects output sound data (for example, first sound data) corresponding to the inferior state. As an example, the server storage unit 32 stores a table that associates data identification information that specifies output sound data with parameters or states. Then, the server control unit 231 refers to the table and selects output sound data.
 この場合、データ取得部11Bは、状態情報(例えばパラメータ)をサーバ230に送信する。そして、サーバ制御部231は、状態取得部211Aがゲーム端末210から取得した状態情報が示す状態に応じた出力音データを選択してゲーム端末210へ送信する。データ取得部11Bは、サーバ制御部231が選択した出力音データをサーバ230から取得することによって、状態に応じた出力音データを取得する。なお、サーバ記憶部232は、複数種類の出力音データを、状態情報又は状態情報によって特定される状態と関連づけて記憶している。これにより、サーバ制御部231は、状態情報又は状態によって、出力音データを選択してゲーム端末210へ送信できる。 In this case, the data acquisition unit 11B transmits status information (for example, parameters) to the server 230. Then, the server control unit 231 selects output sound data according to the state indicated by the state information acquired from the game terminal 210 by the state acquisition unit 211A, and transmits the selected output sound data to the game terminal 210. The data acquisition unit 11B acquires the output sound data selected by the server control unit 231 from the server 230, thereby acquiring output sound data according to the state. Note that the server storage unit 232 stores a plurality of types of output sound data in association with state information or a state specified by the state information. Thereby, the server control unit 231 can select output sound data and transmit it to the game terminal 210 based on the state information or state.
 以上説明した第2実施形態に係るゲームシステム200によれば、パラメータに応じて取得される出力音データに基づく音声を出力させることができる。そのため、ユーザは、育成オブジェクトの育成の成果を聴覚的に感得できる。 According to the game system 200 according to the second embodiment described above, it is possible to output sound based on output sound data acquired according to parameters. Therefore, the user can audibly sense the results of growing the growing object.
 [音データ等のまとめ]
 ここで、第1実施形態及び第2実施形態における音データ等、及び音データ等の生成についてまとめる。生成用データは、演者の歌唱又は朗読などから得られる元データを学習用データとした、機械学習によって生成される。また、楽譜データは、サーバ30の管理者、又はゲーム端末10のユーザにより作成されるか、又は自動的に生成される。出力音データは、生成用データと楽譜データとから直接的に生成されてもよい。また、出力音データは、生成用データと楽譜データとから生成された劣等音データ、基準音データ、及び優等音データとを適宜ミキシングすることにより生成されてもよい。ミキシングする条件は上述のとおりである。さらに、出力音データは、劣等音データ、基準音データ、及び優等音データに基づいて生成された第一音データ、第二音データ、及び第三音データのような複数の音データから所定条件に基づいて選択されてもよい。さらに、劣等音データ、基準音データ、及び優等音データのそれぞれを、そのまま第一音データ、第二音データ、及び第三音データとして用いてもよい。
 また、楽譜データは、固定的であってもよいが、オブジェクトの状態を示す状態情報に基づいて適宜変更されてもよい。すなわち、ゲームの進行に伴い変化する状態情報が所定のタイミングで取得され、取得された状態情報に基づいて、楽譜データに劣等部分として歌唱すべき箇所の指示、又は優等部分として歌唱すべき箇所の指示といった変更が加えられる。そして、変更後の楽譜データと生成用データとに基づいて、出力音データが生成される。或いは、変更後の楽譜データと生成用データとに基づいて、劣等音データ、基準音データ、及び優等音データが生成され、これらの音データのうちから出力音データとして出力するデータが選択されてもよい。また、変更後の楽譜データと生成用データとに基づいて、中間音データとしての劣等音データ、基準音データ、及び優等音データが生成されてもよい。この場合、中間音データを適宜ミキシングすることにより、出力音データが生成されてもよい。
 また、各音データ等は、サーバ30にて生成されてもよいし、ゲーム端末10にて生成されてもよい。出力音データがサーバ30にて生成される場合、所定のタイミングで出力音データはゲーム端末10にダウンロードされる。この場合、楽譜データは少なくともサーバ30が保持する。出力音データがゲーム端末10にて生成される場合、中間音データまたは生成用データはサーバ30にて生成され、適宜のタイミングでゲーム端末10にダウンロードされる。この場合、楽譜データは、ゲーム端末10もしくはサーバ30が保持するか、またはゲーム端末10及びサーバ30が保持する。出力音データ及び中間音データがゲーム端末10にて生成される場合、生成用データは適宜のタイミングでゲーム端末10にダウンロードされる。この場合、楽譜データはゲーム端末10が少なくとも保持する。
 また、楽譜データに対する変更は、サーバ30にて行われてもよいし、ゲーム端末10にて行われてもよい。
 また、出力音データは、ライブ実行中に順次生成されて再生されてもよい。また、出力音データは、ライブ実行直前に選択或いは生成され、ライブ実行中に再生されてもよい。または、セクションの途中の所定のタイミングにおける状態情報が参照されて、出力音データの生成が開始され、ライブ実行時までに生成を終えてもよい。この生成がサーバ30で行われる場合、出力音データのゲーム端末10へのダウンロードが、ユーザ操作等によってなされるゲームの進行と並行してライブ実行時までに行われてもよい。さらに、出力音データとして複数の音データが予めサーバ30で保持されていてもよい。この場合、セクションの途中の所定のタイミングにおける状態情報を参照して、当該複数の音データから出力音データとして使用する音データが決定され、ユーザ操作等によってなされるゲームの進行と並行して、決定された音データがゲーム端末10へダウンロードされてもよい。これらのようにすることで、ダウンロード処理によってユーザを待たせる時間を少なくできる。
 以上、各実施形態を参照して本発明について説明したが、本発明は上記実施形態に限定されるものではない。本発明に反しない範囲で変更された発明、及び本発明と均等な発明も本発明に含まれる。また、各実施形態及び各変形形態、並びに各実施形態又は各変形形態に含まれる技術的手段は、本発明に反しない範囲で適宜組み合わせることができる。
[Summary of sound data, etc.]
Here, the sound data and the like, and the generation thereof, in the first and second embodiments will be summarized. The generation data is generated by machine learning that uses, as learning data, original data obtained from a performer's singing, reading aloud, or the like. The musical score data is created by the administrator of the server 30 or by the user of the game terminal 10, or is generated automatically. The output sound data may be generated directly from the generation data and the musical score data. Alternatively, the output sound data may be generated by appropriately mixing the inferior tone data, reference tone data, and superior tone data generated from the generation data and the musical score data. The conditions for the mixing are as described above. Furthermore, the output sound data may be selected, based on a predetermined condition, from among a plurality of sound data such as the first, second, and third sound data generated based on the inferior tone data, reference tone data, and superior tone data. Moreover, the inferior tone data, reference tone data, and superior tone data may each be used as-is as the first, second, and third sound data, respectively.
The musical score data may be fixed, but may also be changed as appropriate based on the state information indicating the state of the object. That is, state information that changes as the game progresses is acquired at a predetermined timing, and based on the acquired state information, changes are made to the musical score data, such as an instruction designating a portion to be sung as an inferior part or a portion to be sung as a superior part. Output sound data is then generated based on the changed musical score data and the generation data. Alternatively, inferior tone data, reference tone data, and superior tone data may be generated based on the changed musical score data and the generation data, and the data to be output as the output sound data may be selected from among these tone data. Furthermore, inferior tone data, reference tone data, and superior tone data serving as intermediate tone data may be generated based on the changed musical score data and the generation data. In this case, the output sound data may be generated by appropriately mixing the intermediate tone data.
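The score-data modification summarized above can be sketched as annotating notes of the score with an instructed quality. The score representation (a list of note dictionaries) and the marking rule below are purely illustrative assumptions.

```python
# Hedged sketch: annotate musical score data, based on state information,
# with portions to be rendered as "inferior" or "superior". The note format
# and the thresholds/rule here are illustrative assumptions only.

def annotate_score(score, state_param: float):
    """Return a copy of the score with per-note quality annotations."""
    annotated = []
    for i, note in enumerate(score):
        note = dict(note)  # do not mutate the caller's score
        if state_param < 30 and i % 2 == 0:
            note["quality"] = "inferior"  # e.g. sing slightly off pitch/timing
        elif state_param > 80:
            note["quality"] = "superior"  # e.g. apply an expressive technique
        else:
            note["quality"] = "normal"
        annotated.append(note)
    return annotated
```

The annotated score would then be fed, together with the generation data, to the synthesis step that produces the output sound data (or the intermediate tone data).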
Further, each sound data and the like may be generated by the server 30 or by the game terminal 10. When the output sound data is generated by the server 30, the output sound data is downloaded to the game terminal 10 at a predetermined timing. In this case, at least the server 30 holds the musical score data. When output sound data is generated by the game terminal 10, intermediate sound data or generation data is generated by the server 30 and downloaded to the game terminal 10 at an appropriate timing. In this case, the musical score data is held by the game terminal 10 or the server 30, or held by the game terminal 10 and the server 30. When the output sound data and the intermediate sound data are generated by the game terminal 10, the generation data is downloaded to the game terminal 10 at an appropriate timing. In this case, the game terminal 10 at least holds the musical score data.
Furthermore, changes to the musical score data may be made at the server 30 or at the game terminal 10.
The output sound data may be sequentially generated and played back during the live performance. The output sound data may also be selected or generated immediately before the live performance and played back during it. Alternatively, generation of the output sound data may be started with reference to the state information at a predetermined timing partway through a section, and completed by the time the live performance is executed. When this generation is performed by the server 30, the download of the output sound data to the game terminal 10 may be performed, by the time of the live performance, in parallel with the progress of the game driven by user operations and the like. Furthermore, a plurality of sound data may be held in advance in the server 30 as candidates for the output sound data. In this case, the sound data to be used as the output sound data is determined from among the plurality of sound data with reference to the state information at a predetermined timing partway through the section, and the determined sound data may be downloaded to the game terminal 10 in parallel with the progress of the game driven by user operations and the like. In this way, the time the user is kept waiting for download processing can be reduced.
Although the present invention has been described above with reference to each embodiment, the present invention is not limited to the above embodiments. The present invention includes inventions modified within the scope of the present invention and inventions equivalent to the present invention. Further, each embodiment, each modification, and the technical means included in each embodiment or each modification can be combined as appropriate within the scope of the present invention.
 例えば、状態取得部11Aと、データ取得部11Bと、生成部11Dとが、サーバ30とゲーム端末10とに分けて設けられていてもよい。一例として、状態を判断する状態取得部11Aがゲーム端末10に設けられ、生成部11Dがサーバ30に設けられてもよい。この場合、サーバ制御部31と端末制御部11とが協働してコンピュータとして機能する。 For example, the state acquisition unit 11A, the data acquisition unit 11B, and the generation unit 11D may be provided separately for the server 30 and the game terminal 10. As an example, the game terminal 10 may be provided with the state acquisition section 11A that determines the state, and the generation section 11D may be provided on the server 30. In this case, the server control section 31 and the terminal control section 11 cooperate to function as a computer.
 また、データ取得部11Bが、出力音データを生成する生成手段として機能してもよい。 さらに、生成手段は、ゲームシステム100,200の外部に設けられていてもよい。この場合、生成部11D,211Dは省略できる。また、生成用データ、基準音データ、劣等音データ、優等音データ、及び出力音データの少なくとも一つは、予め又は必要に応じて端末記憶部12又はサーバ記憶部32,232に記憶される。さらに、生成部11D,211Dは、ゲームシステム100,200の外部から、生成用データを取得して、基準音データ、劣等音データ、優等音データ、又は及び出力音データを生成してもよい。また、生成部11D,211Dは、ゲームシステム100,200の外部から、基準音データ、劣等音データ、及び優等音データの少なくとも一つを取得して、出力音データを生成してもよい。 Furthermore, the data acquisition unit 11B may function as a generation means that generates the output sound data. The generation means may also be provided outside the game systems 100 and 200. In this case, the generation units 11D and 211D can be omitted. In addition, at least one of the generation data, reference sound data, inferior sound data, superior sound data, and output sound data is stored in the terminal storage unit 12 or the server storage unit 32, 232 in advance or as necessary. Furthermore, the generation units 11D and 211D may acquire generation data from outside the game systems 100 and 200 and generate the reference sound data, inferior sound data, superior sound data, or output sound data. The generation units 11D and 211D may also acquire at least one of the reference sound data, inferior sound data, and superior sound data from outside the game systems 100 and 200 and generate the output sound data.
 また、出力音データは、人間が歌った歌を録音して生成されるデータであってもよい。また、基準音データ、劣等音データ及び優等音データも、人間が歌った歌を録音して生成されるデータであってもよい。これらの場合、出力音データは、予め又は必要に応じて端末記憶部12又はサーバ記憶部32,232に記憶される。 Additionally, the output sound data may be data generated by recording a song sung by a human. Further, the reference tone data, inferior tone data, and superior tone data may also be data generated by recording a song sung by a human. In these cases, the output sound data is stored in the terminal storage unit 12 or the server storage unit 32, 232 in advance or as needed.
 また、育成パートで行われる育成方法は、育成オブジェクトを育成できるものであればよく、上述したものに限られない。例えば、育成オブジェクトを含む複数のオブジェクトがフィールドを歩くことにより敵と遭遇して戦うタイプの育成方法であってもよいし、抽選で当選して獲得したカード等を合成することにより育成オブジェクトのパラメータを上昇させるタイプの育成方法であってもよい。或いは、リズムに合わせて画面内を移動する指標が所定の到達点に到達するタイミングでユーザが操作を行う、いわゆるタイミングゲームを行うことで育成を行うタイプの育成方法であってもよい。 The training method performed in the training part is not limited to those described above, as long as it can train the training object. For example, it may be a training method in which a plurality of objects including the training object walk around a field, encountering and fighting enemies, or a training method in which the parameters of the training object are raised by combining cards or the like obtained by winning a lottery. Alternatively, it may be a training method in which training is performed through a so-called timing game, in which the user performs an operation at the timing when an indicator moving across the screen in accordance with the rhythm reaches a predetermined arrival point.
 また、生成部11D,211Dは、基準音データに基づいて、出力音データ、劣等音データ又は優等音データを生成してもよい。具体的に、生成部11D,211Dは、基準音データに少なくとも一部が劣等部分となるよう変更を加えて劣等音データを生成してもよく、基準音データに少なくとも一部が演出部分となるよう変更を加えて優等音データを生成してもよい。また、生成部11D,211Dは、基準音データに少なくとも一部が劣等部分又は演出部分となるよう変更を適宜加えて、例えば、出力音データの候補として、第一乃至第三音データを生成してもよい。 The generation units 11D and 211D may also generate the output sound data, inferior sound data, or superior sound data based on the reference sound data. Specifically, the generation units 11D and 211D may generate the inferior sound data by modifying the reference sound data so that at least a part thereof becomes an inferior part, and may generate the superior sound data by modifying the reference sound data so that at least a part thereof becomes an embellished part. The generation units 11D and 211D may also appropriately modify the reference sound data so that at least a part thereof becomes an inferior part or an embellished part, thereby generating, for example, the first through third sound data as candidates for the output sound data.
 以下、上述した各実施形態及び各変形例から導き出される各種の態様を付記する。なお、各態様の理解を容易にするため、添付図面に図示された参照符号を記載する。ただし、参照符号は、本発明を図示の形態に限定する意図で記載するものではない。 Hereinafter, various aspects derived from each embodiment and each modification example described above will be added. Note that in order to facilitate understanding of each aspect, reference numerals shown in the accompanying drawings will be described. However, the reference numerals are not intended to limit the invention to the illustrated form.
 (付記1)
 育成対象となるゲームオブジェクトの育成をシミュレーションして、前記ゲームオブジェクトによって曲の演奏又は歌の歌唱がなされる演出が行われ、演奏される前記曲又は歌唱される前記歌を出力するゲームを提供するゲームシステム100,200であって、
 前記ゲームオブジェクトの状態を示す状態情報を取得する状態取得手段11A,211Aと、
 前記曲又は前記歌の音データであって、前記状態情報が示す前記状態に応じた出力音データを取得するデータ取得手段11B,211Bと、
 取得した前記出力音データに基づく音声を出力させる音声出力制御手段11Cとを備える、ゲームシステム100,200。
(Additional note 1)
A game system 100, 200 that provides a game in which the raising of a game object to be raised is simulated, an effect in which the game object plays a piece of music or sings a song is performed, and the played music or sung song is output, the game system comprising:
state acquisition means 11A, 211A for acquiring state information indicating the state of the game object;
data acquisition means 11B, 211B for acquiring output sound data, which is sound data of the music or the song, corresponding to the state indicated by the state information; and
sound output control means 11C for causing sound based on the acquired output sound data to be output.
 (付記2)
 前記出力音データは、前記状態情報が示す前記状態に応じた劣等部分を少なくとも一部に含む、付記1に記載のゲームシステム100,200。
(Additional note 2)
The game system 100, 200 according to supplementary note 1, wherein the output sound data includes, in at least a part thereof, an inferior portion corresponding to the state indicated by the state information.
 (付記3)
 前記データ取得手段11B,211Bは、前記状態情報が劣った状態を示す場合には、前記状態情報が優れた状態を示す場合と比較して、前記劣等部分の割合が多い前記出力音データを取得する、付記2に記載のゲームシステム100,200。
(Additional note 3)
The game system 100, 200 according to supplementary note 2, wherein, when the state information indicates an inferior state, the data acquisition means 11B, 211B acquires output sound data in which the proportion of the inferior portion is greater than when the state information indicates a superior state.
 (付記4)
 前記データ取得手段は、互いに異なる複数種類の音データの中から、前記状態情報に応じた前記出力音データを選択して取得する、付記1から3のいずれか一項に記載のゲームシステム100,200。
(Additional note 4)
The game system 100, 200 according to any one of supplementary notes 1 to 3, wherein the data acquisition means selects and acquires the output sound data corresponding to the state information from among a plurality of mutually different types of sound data.
 (付記5)
 前記状態情報は、パラメータであり、
 前記ゲームオブジェクトの育成をシミュレーションして、前記ゲームの進行に応じて前記パラメータを変化させるゲーム進行手段11Cをさらに備える、付記1から4のいずれか一項に記載のゲームシステム100,200。
(Appendix 5)
The state information is a parameter,
The game system 100, 200 according to any one of Supplementary Notes 1 to 4, further comprising a game progressing means 11C that simulates the growth of the game object and changes the parameters according to the progress of the game.
 (付記6)
 前記データ取得手段11B,211Bは、互いに異なる複数種類の音データの中から、前記パラメータに応じた前記出力音データを選択して取得する、付記5に記載のゲームシステム100,200。
(Appendix 6)
The game system 100, 200 according to appendix 5, wherein the data acquisition means 11B, 211B selects and acquires the output sound data according to the parameters from among a plurality of different types of sound data.
 (付記7)
 前記パラメータは、前記ゲームが進行すると高くなるか、又は前記ゲームオブジェクトの前記状態が悪化すると低くなる、付記5又は6に記載のゲームシステム100,200。
(Appendix 7)
The game system 100, 200 according to appendix 5 or 6, wherein the parameter increases as the game progresses or decreases as the state of the game object deteriorates.
 (付記8)
 前記劣等部分を少なくとも一部に含む一の音データと、前記一の音データと比較して前記劣等部分の割合が少ないか、又は前記劣等部分を含まない少なくとも一つの他の音データとをミキシングすることによって、前記出力音データを生成する生成手段11D,211Dをさらに備える、付記2に記載のゲームシステム100,200。
(Appendix 8)
The game system 100, 200 according to supplementary note 2, further comprising generation means 11D, 211D that generates the output sound data by mixing one sound data that includes an inferior portion in at least a part thereof with at least one other sound data that has a smaller proportion of the inferior portion than the one sound data or that does not include the inferior portion.
 (付記9)
 前記劣等部分は、前記他の音データと比較して音の出力タイミング又は音高が異なる、付記8に記載のゲームシステム100,200。
(Appendix 9)
The game system 100, 200 according to appendix 8, wherein the inferior portion has a different sound output timing or pitch than the other sound data.
 (付記10)
 前記他の音データは、演奏テクニック又は歌唱テクニックを反映する演出が施された演出部分を少なくとも一部に含む、付記8又は9に記載のゲームシステム100,200。
(Appendix 10)
The game system 100, 200 according to supplementary note 8 or 9, wherein the other sound data includes, in at least a part thereof, an embellished portion to which an effect reflecting a performance technique or a singing technique is applied.
 (付記11)
 前記曲又は前記歌の基準音データに基づいて、前記出力音データを生成する生成手段11D,211Dをさらに備える、付記1から3のいずれか一項に記載のゲームシステム100,200。
(Appendix 11)
The game system 100, 200 according to any one of supplementary notes 1 to 3, further comprising generation means 11D, 211D that generates the output sound data based on reference sound data of the music or the song.
 (付記12)
 コンピュータ11,231を備えるとともに、育成対象となるゲームオブジェクトの育成をシミュレーションして、前記ゲームオブジェクトによって曲の演奏又は歌の歌唱がなされる演出が行われ、演奏される前記曲又は歌唱される前記歌を出力するゲームを提供するゲームシステム100,200のゲームプログラムPG,PG2であって、
 前記コンピュータ11,231を、
 前記ゲームオブジェクトの状態を示す状態情報を取得する状態取得手段11A,211Aと、
 前記曲又は前記歌の音データであって、前記状態情報が示す前記状態に応じた出力音データを取得するデータ取得手段11B,211Bと、
 取得した前記出力音データに基づく音声を出力させる音声出力制御手段11Cとして機能させる、ゲームプログラムPG,PG2。
(Appendix 12)
Game programs PG, PG2 for a game system 100, 200 that comprises a computer 11, 231 and provides a game in which the raising of a game object to be raised is simulated, an effect in which the game object plays a piece of music or sings a song is performed, and the played music or sung song is output, the game programs causing the computer 11, 231 to function as:
state acquisition means 11A, 211A for acquiring state information indicating the state of the game object;
data acquisition means 11B, 211B for acquiring output sound data, which is sound data of the music or the song, corresponding to the state indicated by the state information; and
sound output control means 11C for causing sound based on the acquired output sound data to be output.
 (付記13)
 コンピュータ11,231を備えるとともに、育成対象となるゲームオブジェクトの育成をシミュレーションして、前記ゲームオブジェクトによって曲の演奏又は歌の歌唱がなされる演出が行われ、演奏される前記曲又は歌唱される前記歌を出力するゲームを提供するゲームシステム100,200の制御方法であって、
 前記コンピュータ11,231に、
 前記ゲームオブジェクトの状態を示す状態情報を取得させ、
 前記曲又は前記歌の音データであって、前記状態情報が示す前記状態に応じた出力音データを取得させ、
 取得した前記出力音データに基づく音声を出力させる、制御方法。
(Appendix 13)
A control method for a game system 100, 200 that comprises a computer 11, 231 and provides a game in which the raising of a game object to be raised is simulated, an effect in which the game object plays a piece of music or sings a song is performed, and the played music or sung song is output, the method causing the computer 11, 231 to:
acquire state information indicating the state of the game object;
acquire output sound data, which is sound data of the music or the song, corresponding to the state indicated by the state information; and
output sound based on the acquired output sound data.
According to the game systems 100, 200 of Appendices 1 to 3, the game program PG, PG2 of Appendix 12, or the control method of Appendix 13, audio corresponding to the training result is output, so the user can audibly perceive the outcome of raising the game object. Further, because multiple opportunities to output such audio are provided within a single training part, the user can sense that the played musical piece or the sung song improves through training. According to the game systems 100, 200 of Appendices 4 and 6, the process of generating output sound data each time can be omitted, reducing the processing load. According to the game systems 100, 200 of Appendices 5 to 7, the state can be determined from a parameter serving as the state information, so that increases and decreases in the parameter resulting from training are reflected in the quality of the output sound data.
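As an illustration of the parameter-based acquisition described above, the data acquisition step can simply pick one of several pre-rendered recordings of the same piece according to the current training parameter, so no audio is generated at playback time. The following is a minimal hypothetical sketch; the function name, file names, and threshold scheme are illustrative assumptions, not part of the disclosure.

```python
# Hypothetical sketch: select a pre-rendered take of the piece based on a
# training parameter. Takes are keyed by the minimum parameter they require.

def select_output_sound_data(parameter: int, sound_variants: dict[int, str]) -> str:
    """Return the recording whose threshold is the highest one not
    exceeding the current training parameter."""
    eligible = [t for t in sound_variants if t <= parameter]
    if not eligible:
        # Parameter below every threshold: fall back to the weakest take,
        # which contains the largest share of "inferior" passages.
        return sound_variants[min(sound_variants)]
    return sound_variants[max(eligible)]

# Three takes of the same piece (illustrative names).
variants = {0: "performance_poor.ogg", 50: "performance_mid.ogg", 90: "performance_good.ogg"}

print(select_output_sound_data(10, variants))   # low parameter -> poor take
print(select_output_sound_data(75, variants))   # mid parameter -> mid take
print(select_output_sound_data(95, variants))   # high parameter -> good take
```

Because the mapping is a lookup rather than a synthesis step, raising the parameter during the training part immediately changes which take is played back.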
According to the game systems 100, 200 of Appendices 8 to 10, output sound data containing an inferior portion can be generated by mixing, without generating a large number of pieces of output sound data. Likewise, according to the game systems 100, 200 of Appendix 11, output sound data can be generated by incorporating an inferior portion into the reference sound data, again without generating a large number of pieces of output sound data.
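The mixing approach of Appendices 8 to 10 can be sketched as a crossfade between a single "flawed" take and a single "clean" take of the same piece, with the blend ratio tracking the training parameter; only two recordings then cover the whole skill range. All names, the 0-100 parameter scale, and the linear blend are illustrative assumptions.

```python
# Hypothetical sketch: blend a flawed take (wrong timing / pitch in places)
# with a clean take (which may also carry technique embellishments),
# weighting the clean take more as the training parameter grows.

def mix_output_sound_data(flawed, clean, parameter, max_parameter=100):
    """Linearly crossfade two equally long sample sequences."""
    w = max(0.0, min(1.0, parameter / max_parameter))  # weight of the clean take
    return [(1.0 - w) * f + w * c for f, c in zip(flawed, clean)]

flawed_take = [0.2, -0.4, 0.1, 0.0]   # toy sample values
clean_take = [0.3, -0.2, 0.2, 0.1]

print(mix_output_sound_data(flawed_take, clean_take, 0))    # pure flawed take
print(mix_output_sound_data(flawed_take, clean_take, 100))  # pure clean take
```

A per-section (rather than per-sample) blend would match the claims even more closely, since the inferior portion is localized to particular passages; the uniform crossfade above is just the simplest form of the idea.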
11: Terminal control unit (computer)
11A: State acquisition unit (state acquisition means)
11B: Data acquisition unit (data acquisition means)
11C: Game progress unit (audio output control means)
11D: Generation unit (generation means)
16: Audio output unit (audio output means)
100: Game system
200: Game system
211A: State acquisition unit (state acquisition means)
211B: Data acquisition unit (data acquisition means)
211D: Generation unit (generation means)
231: Server control unit (computer)
PG: Terminal program (game program)
PG2: Server program (game program)

Claims (13)

  1.  A game system for providing a game in which the raising of a game object to be raised is simulated, a presentation is performed in which the game object plays a musical piece or sings a song, and the played musical piece or the sung song is output, the game system comprising:
     state acquisition means for acquiring state information indicating a state of the game object;
     data acquisition means for acquiring output sound data of the musical piece or the song corresponding to the state indicated by the state information; and
     audio output control means for outputting audio based on the acquired output sound data.
  2.  The game system according to claim 1, wherein the output sound data includes, at least in part, an inferior portion corresponding to the state indicated by the state information.
  3.  The game system according to claim 2, wherein, when the state information indicates an inferior state, the data acquisition means acquires the output sound data having a higher proportion of the inferior portion than when the state information indicates a superior state.
  4.  The game system according to claim 2, wherein the data acquisition means selects and acquires the output sound data corresponding to the state information from among a plurality of mutually different types of sound data.
  5.  The game system according to any one of claims 1 to 4, wherein the state information is a parameter,
     the game system further comprising game progress means for simulating the raising of the game object and changing the parameter according to the progress of the game.
  6.  The game system according to claim 5, wherein the data acquisition means selects and acquires the output sound data corresponding to the parameter from among a plurality of mutually different types of sound data.
  7.  The game system according to claim 5, wherein the parameter increases as the game progresses or decreases as the state of the game object deteriorates.
  8.  The game system according to claim 2, further comprising generation means for generating the output sound data by mixing one piece of sound data that includes the inferior portion at least in part with at least one other piece of sound data that has a smaller proportion of the inferior portion than the one piece of sound data or does not include the inferior portion.
  9.  The game system according to claim 8, wherein the inferior portion differs from the other sound data in sound output timing or pitch.
  10.  The game system according to claim 8, wherein the other sound data includes, at least in part, an embellished portion rendered to reflect a performance technique or a singing technique.
  11.  The game system according to claim 2, further comprising generation means for generating the output sound data based on reference sound data of the musical piece or the song.
  12.  A game program for a game system that comprises a computer and provides a game in which the raising of a game object to be raised is simulated, a presentation is performed in which the game object plays a musical piece or sings a song, and the played musical piece or the sung song is output,
     the game program causing the computer to function as:
     state acquisition means for acquiring state information indicating a state of the game object;
     data acquisition means for acquiring output sound data of the musical piece or the song corresponding to the state indicated by the state information; and
     audio output control means for outputting audio based on the acquired output sound data.
  13.  A control method for a game system that comprises a computer and provides a game in which the raising of a game object to be raised is simulated, a presentation is performed in which the game object plays a musical piece or sings a song, and the played musical piece or the sung song is output,
     the control method causing the computer to:
     acquire state information indicating a state of the game object;
     acquire output sound data of the musical piece or the song corresponding to the state indicated by the state information; and
     output audio based on the acquired output sound data.
PCT/JP2023/028307 2022-08-04 2023-08-02 Game system, and game program and control method for game system WO2024029572A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022124850 2022-08-04
JP2022-124850 2022-08-04

Publications (1)

Publication Number Publication Date
WO2024029572A1 true WO2024029572A1 (en) 2024-02-08

Family

ID=89849438

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/028307 WO2024029572A1 (en) 2022-08-04 2023-08-02 Game system, and game program and control method for game system

Country Status (1)

Country Link
WO (1) WO2024029572A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003242289A (en) * 2002-02-20 2003-08-29 Sony Corp Device and method and program for contents processing
JP2005081011A (en) * 2003-09-10 2005-03-31 Namco Ltd Game system, program, and information storage medium
JP2010004898A (en) * 2008-06-24 2010-01-14 Daito Giken:Kk Game machine, electronic apparatus and sound output method of game machine
JP2013195699A (en) * 2012-03-19 2013-09-30 Yamaha Corp Singing synthesis device and singing synthesis program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23850126

Country of ref document: EP

Kind code of ref document: A1