WO2014142288A1 - Song editing device and song editing system - Google Patents

Song editing device and song editing system

Info

Publication number
WO2014142288A1
Authority
WO
WIPO (PCT)
Prior art keywords
music
editing
song
data
order
Prior art date
Application number
PCT/JP2014/056806
Other languages
French (fr)
Japanese (ja)
Inventor
紀行 畑
公爾 恩田
Original Assignee
ヤマハ株式会社
Priority date
Filing date
Publication date
Application filed by ヤマハ株式会社 (Yamaha Corporation)
Publication of WO2014142288A1 publication Critical patent/WO2014142288A1/en

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00Details of electrophonic musical instruments
    • G10H1/36Accompaniment arrangements
    • G10H1/361Recording/reproducing of accompaniment for use with an external source, e.g. karaoke systems
    • G10H1/365Recording/reproducing of accompaniment for use with an external source, e.g. karaoke systems the accompaniment information being stored on a host computer and transmitted to a reproducing terminal by means of a network, e.g. public telephone lines
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2240/00Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
    • G10H2240/121Musical libraries, i.e. musical databases indexed by musical parameters, wavetables, indexing schemes using musical parameters, musical rule bases or knowledge bases, e.g. for automatic composing methods
    • G10H2240/131Library retrieval, i.e. searching a database or selecting a specific musical piece, segment, pattern, rule or parameter set

Definitions

  • the present invention relates to a music editing apparatus and music editing system for editing music.
  • Patent Document 1, for example, proposes a karaoke apparatus that completes the performance of all reserved songs within the remaining time by extracting a part of each reserved song and playing it partially.
  • an object of the present invention is to provide a music editing apparatus capable of editing a plurality of selected songs so that the excitement builds within the remaining time.
  • the music editing apparatus of the present invention includes receiving means for receiving selection of a plurality of pieces of music data, and editing means that rearranges the order of the selected songs corresponding to the music data received by the receiving means based on the priority of each piece of music data, and edits each piece of music data in accordance with a predetermined time limit.
  • in this way, the order of the songs is rearranged based on the priority of each song and the songs are edited so that playback ends within the remaining time (predetermined time limit), so the music data can be edited so that the excitement builds within the remaining time. For example, well-known songs and classic standards tend to be exciting, so if a high priority is set for them and the songs are rearranged so that higher-priority songs are placed later, the excitement builds as the remaining time decreases.
  • the priority may include information corresponding to the reproduction history of each piece of music data.
  • Information according to the playback history includes, for example, annual ranking, monthly ranking, weekly ranking, regional ranking, gender ranking, age ranking, etc., and it is desirable to set a higher priority for songs with higher ranking. Alternatively, it is desirable that a song with a high ranking in the past be set with a high priority as a classic song.
  • Each piece of music data includes information indicating its constituent sections (prelude, hook, interlude, and so on), and the editing means sets an extraction order for the constituent sections in accordance with the rearranged order of the selected songs. For example, since the hook is the most exciting constituent section, editing is performed by first extracting the hook section and then extracting the verse-chorus block that contains it, and so on.
  • if the playback time of the extracted songs slightly exceeds the remaining time, the playback time may be shortened by, for example, increasing the tempo.
  • if the playback time of the extracted songs is significantly shorter than the remaining time, a song other than the selected songs (for example, a standard song) may be inserted. When inserting a song, it is preferable to determine the performance order taking the priority of that song into account as well.
  • when the music editing apparatus is a karaoke apparatus, the user information preferably includes a singing history, for example the number of visits to the store and the scoring results for each song. Since the room gets more excited when a singer with high scoring results sings in the second half, songs sung by such singers are preferably arranged later in the order.
  • the music editing apparatus according to each of the above aspects can be realized by hardware (electronic circuits) such as a DSP (Digital Signal Processor) dedicated to music data processing, or by cooperation between a general-purpose processor such as a CPU (Central Processing Unit) and a program.
  • the program according to the present invention causes a computer to execute a reception process for receiving selection of a plurality of pieces of music data, and an editing process for rearranging the order of the selected songs corresponding to the received music data based on the priority of each piece of music data and editing each piece of music data in accordance with a predetermined time limit.
  • the program according to the above aspect can be provided in a form stored in a computer-readable recording medium and installed in the computer.
  • the recording medium is, for example, a non-transitory recording medium; an optical recording medium (optical disc) such as a CD-ROM is a typical example, but any known form of recording medium such as a semiconductor recording medium or a magnetic recording medium may be used.
  • the program of the present invention can be provided in the form of distribution via a communication network and installed in a computer.
  • according to one aspect of the present invention, there is provided a music editing method in which the selection of a plurality of pieces of music data is accepted, the order of the selected songs corresponding to the selected music data is rearranged based on the priority of each piece of music data, and each piece of music data is edited in accordance with a predetermined time limit.
  • FIG. 11 is a flowchart showing the overall operation of medley generation and performance.
  • FIG. 12 is a block diagram showing the configuration of the portable audio player.
  • FIG. 13 is a block diagram showing a music editing apparatus which is an embodiment of the present invention.
  • FIG. 14 is a flowchart showing a music editing method which is an embodiment of the present invention.
  • FIG. 1 is a diagram showing a configuration of a karaoke system according to the present embodiment.
  • the karaoke system includes a center (server) 1 connected via a network 2 such as the Internet and a plurality of karaoke stores 3.
  • Each karaoke store 3 is provided with a host 5 connected to the network 2 and a plurality of karaoke devices 7 connected to the network 2 via the host 5.
  • the host 5 also functions as a relay device such as a router. However, the relay device may be provided as a separate unit.
  • the host 5 is installed in a management room of a karaoke store.
  • a plurality of karaoke apparatuses 7 are installed in each private room (karaoke box).
  • Each karaoke device 7 is provided with a remote controller 9.
  • FIG. 2 is a block diagram showing the configuration of the karaoke apparatus.
  • the karaoke apparatus 7 includes a CPU 11 that controls the operation of the entire apparatus, and various components connected to the CPU 11.
  • to the CPU 11 are connected a RAM 12, an HDD 13, a network interface (I/F) 14, an LCD (touch panel) 15, an A/D converter 17, a sound source 18, a mixer (effector) 19, a decoder 22 for MPEG and other formats, a display processing unit 23, an operation unit 25, and a transmission/reception unit 26.
  • the HDD 13 stores an operation program for the CPU 11.
  • in the RAM 12, which is a work memory, an area into which the operation program of the CPU 11 is read, an area into which music data is read for playing karaoke songs, a reservation list, and the like are set.
  • the HDD 13 stores music data for playing karaoke music, video data for displaying a background video on the monitor 24, and the like.
  • the video data includes both moving images and still images. Music data and video data are periodically distributed from the center 1 and updated.
  • the HDD 13 also holds a database in which various pieces of information (singer name, song name, genre, song composition) are stored in association with the song number of each piece of music data (see FIG. 3B and FIG. 3C).
  • the database is updated as various information is distributed from the center 1.
  • the CPU 11 is a control unit that comprehensively controls the karaoke apparatus and functionally incorporates a sequencer to perform karaoke performance. Further, the CPU 11 performs an audio signal generation process, a video signal generation process, and a medley generation process.
  • the touch panel 15 and the operation unit 25 are provided on the front surface of the karaoke apparatus.
  • the CPU 11 displays an image corresponding to the operation information on the touch panel 15 based on the operation information input from the touch panel 15 to realize a GUI.
  • the remote controller 9 also realizes the same GUI.
  • the CPU 11 performs various operations based on operation information input via the touch panel 15, the operation unit 25, or, through the transmission/reception unit 26, the remote controller 9. For example, when a selection (reproduction reservation) of music data is accepted from the user via the touch panel 15, the operation unit 25, or the remote controller 9, the song corresponding to the selected music data is registered in the reservation list in the RAM 12. That is, the touch panel 15, the operation unit 25, or the transmission/reception unit 26 corresponds to the receiving means of the present invention, as shown in FIG. 13.
  • when the user gives an instruction to generate a medley using the touch panel 15, the operation unit 25, or the remote controller 9, the CPU 11 performs the medley generation process. That is, the CPU 11 corresponds to the editing means of the present invention, as shown in FIG. 13.
  • the music editing apparatus of the present invention is realized by the CPU 11, the RAM 12, the HDD 13, the remote controller 9, and the like.
  • the medley generation process, described in detail later, is a process that, after the selection (reproduction reservation) of songs (music data) has been accepted, changes the performance order of the reserved songs, extracts a part of each song, and joins the parts smoothly to generate a single new piece of music.
  • the CPU 11 functionally includes a sequencer.
  • the CPU 11 reads music data corresponding to the music number of the reserved music registered in the reserved list in the RAM 12 from the HDD 13, and performs a karaoke performance with the sequencer.
  • as shown in FIG. 3A, for example, the music data consists of a header in which the song number and the like are written, a musical tone track in which performance MIDI data is written, a guide melody track in which guide melody MIDI data is written, a lyrics track in which lyric MIDI data is written, a chorus track in which backing chorus playback timings and the audio data to be played back are written, and so on. Note that the format of the music data is not limited to this example.
  • the sequencer controls the sound source 18 based on the data of the musical tone track and the guide melody track, and generates the musical tone of the karaoke song.
  • the sequencer also reproduces the back chorus audio data (compressed audio data such as MP3 attached to the music data) at the timing designated by the chorus track.
  • the sequencer synthesizes the character pattern of the lyrics in synchronism with the progress of the song based on the lyrics track, converts the character pattern into a video signal, and inputs it to the display processing unit 23.
  • the sound source 18 forms a musical sound signal (digital audio signal) according to data (note event data) input from the CPU 11 by processing of the sequencer.
  • the formed tone signal is input to the mixer 19.
  • the mixer 19 applies effects such as echo to the musical tone signal generated by the sound source 18, the chorus sound, and the singer's singing voice signal input from the microphone 16 via the A/D converter 17, and mixes these signals.
  • Each mixed digital audio signal is input to the sound system (SS) 20.
  • the sound system 20 incorporates a D / A converter and a power amplifier, converts an input digital signal into an analog signal, amplifies it, and emits sound from the speaker 21.
  • the effect that the mixer 19 gives to each audio signal and the balance of mixing are controlled by the CPU 11.
  • the CPU 11 reads the video data stored in the HDD 13 and reproduces the background video and the like in synchronism with the generation of musical sounds and the generation of the lyrics telop by the sequencer.
  • the video data of the moving image is encoded in the MPEG format.
  • the CPU 11 inputs the read video data to the decoder 22.
  • the decoder 22 converts the input data such as MPEG into a video signal and inputs it to the display processing unit 23.
  • in addition to the video signal of the background video, the character pattern of the lyrics telop is input to the display processing unit 23.
  • the display processing unit 23 synthesizes a lyrics telop or the like on the video signal of the background video by using the OSD and outputs it to the monitor 24.
  • the monitor 24 displays the video signal input from the display processing unit 23.
  • in this way, the karaoke performance is carried out.
  • the CPU 11 counts up the number of performances of the music performed in the database at the start or end of performance of the music, and stores the performance history as log information (history) in the HDD 13.
  • the log information is periodically transmitted to the center 1, and the log information of the karaoke apparatuses 7 nationwide is tabulated as ranking information (see FIG. 5) at the center 1.
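  • As a rough illustration of how such log information could be tabulated into ranking information, the following Python sketch sums per-apparatus play counts into a single ranking. The data layout is an assumption made for illustration, not the format actually used by the center 1.

```python
from collections import Counter

def tabulate_ranking(log_entries):
    """Aggregate play-count log entries from many karaoke apparatuses into one
    nationwide ranking (most-played song first).

    log_entries: iterable of (song_number, play_count) pairs; a hypothetical
    layout for the log information sent to the center 1.
    """
    totals = Counter()
    for song_number, play_count in log_entries:
        totals[song_number] += play_count
    return [song_number for song_number, _ in totals.most_common()]

# Example: logs collected from two apparatuses.
logs = [(157270, 12), (124504, 7), (157270, 3)]
print(tabulate_ranking(logs))  # [157270, 124504]
```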
  • next, the medley generation process will be described. When the user instructs medley generation using the touch panel 15, the operation unit 25, or the remote controller 9, the CPU 11 performs the following process.
  • the CPU 11 reads the song numbers of the reserved songs registered in the reservation list in the RAM 12, rearranges the order of the songs based on the priority of each song, and generates a medley by editing the songs so that the performance ends within the remaining time (predetermined time limit) for which the karaoke system can be used. The priority of each song corresponds to how much it livens up the karaoke session.
  • FIG. 4 is a diagram showing analysis data generated based on the priority of each song and user information. This figure shows an example in which six songs are registered in the reservation list and three users sing. As shown in FIG. 4, each piece of music has a score set for each item as a priority according to the reproduction history (performance history).
  • the performance history includes annual rankings, monthly rankings, weekly rankings, regional rankings, gender rankings, age rankings, etc., and higher scores are set for songs with higher rankings.
  • FIG. 5 is a diagram showing ranking information.
  • the ranking information is periodically downloaded from the center 1 and stored in the HDD 13. Alternatively, the ranking information may be downloaded by inquiring of the center 1 at the timing of generating the analysis data (the HDD 13 realizes the singing history storage means of the present invention).
  • each song is given a score according to the number of times it is registered within the top 10 and the number of times it is registered within the top 100. For example, the song No. 2 (song number: 157270) shown in FIG. 4 is registered at 99th in the weekly ranking, 100th in the monthly ranking, and 99th in the annual ranking, as shown in FIG. 5, so 10 points × 3 = 30 points are set in the “within 100th” column of FIG. 4; it is also registered at 4th in the standard (start) ranking and 3rd in the standard (closing) ranking, so 10 points × 2 = 20 points are set in the “standard” column. In this way, the CPU 11 calculates the score of each song from the song numbers registered in the reservation list and generates the analysis data shown in FIG. 4.
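  • The ranking-based part of this scoring (10 points for each appearance within the top 100) could be sketched as below. The ranking tables and their layout are assumptions; only the 10-point weight per appearance follows the FIG. 4 example.

```python
POINTS_PER_HIT = 10  # 10 points per registration, following the FIG. 4 example

def ranking_score(song_number, rankings, top_n=100):
    """Count in how many rankings the song appears within the top `top_n`
    positions and convert that count into points (10 points per appearance).

    rankings: dict mapping a ranking name to an ordered list of song numbers
    (a hypothetical layout for the ranking information of FIG. 5).
    """
    hits = sum(1 for chart in rankings.values() if song_number in chart[:top_n])
    return hits * POINTS_PER_HIT

# Toy data: song 157270 sits within the top 100 of three charts, so it gets
# 30 points, matching the "within 100th" column of FIG. 4.
rankings = {
    "weekly":  [111111] * 98 + [157270],   # 99th place
    "monthly": [111111] * 99 + [157270],   # 100th place
    "annual":  [111111] * 98 + [157270],   # 99th place
}
print(ranking_score(157270, rankings))  # 30
```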
  • a score based on user information is set in the analysis data. As shown in FIG. 4, “singer”, “other history”, “95 points or more”, and “90 points or more” are set as score items based on user information. Further, “age”, “organization” and “visit” are set for the score of “singer”.
  • FIG. 6 is a diagram showing user information.
  • User information is stored in the center 1.
  • the CPU 11 downloads user information from the center 1 and stores it in the RAM 12 (thereby, the RAM 12 implements the user information storage means of the present invention).
  • the user information includes information such as each user's ID, name, date of birth, gender, family, school of origin, affiliation, title, singing history, favorites, and the like.
  • the CPU 11 generates the analysis data based on this user information. For example, as shown in FIG. 4, points are given according to age among the user “... A3”, the user “... B1”, and the user “... CC”; in this example, 10 points are set for the oldest user “... B1” and 5 points for the next oldest user “... CC”.
  • in this example, since all the users belong to the same organization, a high score (30 points) is set as the “organization” score for the user “... B1”, who has the highest job title. If the school of origin (for example, elementary school) is the same for all users, a high score may be set for the oldest person (the same score as for age may be used). Alternatively, if all the users are members of the same family, a high score may be set for any one person (for example, the father). Further, the CPU 11 assigns points according to the number of visits registered in the “singing history”; for example, since the user “... A3” has visited the store 12 times, 12 points are set. The CPU 11 sums the scores for “age”, “organization”, and “visits” to obtain the singer score.
  • in addition, a score based on each user's past scoring results is set in the analysis data. For example, since there is a scoring result of 95 points or more for song No. 1 (song number: 124504), 10 points are set in the “95 points or more” column and 10 points in the “90 points or more” column.
  • in the example of FIG. 4, the “genre” column is blank; a score is assigned to the “genre” column only when the age groups of all the users are close. For example, if everyone is in their twenties, 10 points are set for the J-POP and rock genres.
  • when the CPU 11 receives an instruction to generate a medley, it generates the analysis data as described above and totals the scores set for each song. The CPU 11 then changes the performance order of the reserved songs based on the total score; a song with a higher total score is considered to be more exciting. It is desirable to arrange the songs in ascending order of total score so that the higher-scoring songs come later as the remaining time decreases. However, since the opening song should also be exciting, it is desirable to rearrange the songs so that the song with the second highest total score comes first. For example, in the example of FIG. 4, song No. 2, which has the second highest total score (101 points), is placed first; next comes song No. 4, which has the lowest score (22 points), followed by No. 1 (42 points), No. 3 (78 points), and No. 6 (88 points); and song No. 5, which has the highest score (111 points), is placed last.
  • the order of the songs is not limited to this example. For example, additional conditions may be applied, such as not playing songs of the same genre consecutively, not having the same user sing consecutively, or not having users of the same gender sing consecutively.
  • the song with the lowest total score may be arranged at the end, and the song with the next lowest total score may be arranged at the top.
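  • As a concrete illustration of the main ordering rule in the example above (second-highest total score first, the rest in ascending order, highest score last, rather than the alternative just mentioned), a minimal sketch could look like this. The song identifiers and scores mirror the FIG. 4 example and are otherwise hypothetical.

```python
def arrange_order(scored_songs):
    """scored_songs: list of (song_id, total_score) pairs.
    Returns the performance order: second-highest score first, the rest in
    ascending order of score, highest score last."""
    ascending = sorted(scored_songs, key=lambda s: s[1])
    if len(ascending) < 3:
        return ascending          # too few songs for the special placement
    last = ascending[-1]          # highest total score -> final song
    first = ascending[-2]         # second highest total score -> opening song
    middle = ascending[:-2]       # remaining songs in ascending order
    return [first] + middle + [last]

songs = [("No.1", 42), ("No.2", 101), ("No.3", 78),
         ("No.4", 22), ("No.5", 111), ("No.6", 88)]
print([sid for sid, _ in arrange_order(songs)])
# ['No.2', 'No.4', 'No.1', 'No.3', 'No.6', 'No.5']
```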
  • the CPU 11 extracts a part of the rearranged music pieces and edits so that all reserved songs can be played within the remaining time. First, the CPU 11 determines the extraction order for each performance section for each piece of music.
  • FIG. 7A, 7B, and 7C are diagrams showing the order of extraction. The extraction order differs depending on whether each song is the first song, the last song, or another song.
  • FIG. 7A is a diagram showing the extraction order of the first song. For the first song, the hook of the first verse-chorus block is ranked first in the extraction order, and the first verse-chorus block including that hook is ranked second.
  • FIG. 7B is a diagram showing the extraction order of the final song. For the final song, the verse-chorus block including the final (big) hook is ranked first in the extraction order, and the block including the first hook is ranked second. Whereas for the first song the prelude is ranked third in the extraction order and the postlude is not extracted (deleted), for the final song the postlude is ranked third and the prelude is not extracted.
  • FIG. 7C shows the extraction order of the other songs. The other songs follow the same extraction order as the first song, except that neither the prelude nor the postlude is extracted.
  • the CPU 11 extracts the performance sections (composition sections) of each song in the order of extraction as described above, and edits so that all reserved songs can be played within the remaining time.
  • the start time, the end time, and the section length are preset for each performance section in each musical piece.
  • the remaining time may be manually set by the user using the touch panel 15, the operation unit 25, or the remote control 9, for example, or may be set by the store in accordance with the scheduled use time when starting to use the karaoke system.
  • the CPU 11 first extracts, for every song, the performance section ranked first in that song's extraction order, and determines whether the total fits within the remaining time. If it does, the CPU 11 goes on to the performance sections ranked second, and the extraction process is repeated in this way according to the extraction order. However, for the performance sections ranked second or lower, the sections are not extracted for all songs at once; they are extracted one song at a time in an order according to the total score, and after each extraction it is determined whether the total still fits within the remaining time.
  • the order according to the total score may simply be descending order of total score, or, as shown in FIG. 8, the songs may be arranged per user in order of total score, with the users themselves arranged in descending order of singer score. For example, in FIG. 8 the order according to the total score is No. 5 → No. 1 → No. 6 → No. 2 → No. 4 → No. 3. Therefore, the performance section ranked second in the extraction order is first extracted for song No. 5; since song No. 5 is the final song as described above, the verse-chorus performance section including the big hook is extracted. If the CPU 11 determines that the result still fits within the remaining time, the second-ranked performance section is extracted for the next song, No. 1, and so on. For song No. 2, which is the first song as described above, the performance section of the first verse-chorus block is extracted when its turn comes. In this way, the important performance sections of each song are extracted step by step, and the extraction ends when the remaining time is exceeded.
  • if the remaining time is greatly exceeded even when only the performance section ranked first in the extraction order (the hook) is extracted for each song, a specific part is extracted from within the hook.
  • such a specific part can be extracted when it is set in advance for each piece of music data in the database (see FIG. 3). Further, if the remaining time is still greatly exceeded even after extracting only specific parts, songs may be deleted (not played) starting from those with the lowest priority. Conversely, if all performance sections of all the songs are extracted and the total is still significantly shorter than the remaining time, the same performance section may be repeated or a standard song may be inserted. When inserting a song, it is preferable to determine the performance order taking the priority of that song into account as well and then perform the extraction.
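  • The stepwise extraction described above could be sketched roughly as follows: every song contributes its first-ranked section, and then further sections are added one song at a time, in the order according to the total score, until the remaining time is exceeded. The section lists, their preset durations, and the function names are assumptions for illustration only.

```python
def extract_sections(extraction_orders, score_order, remaining_ms):
    """Pick performance sections song by song until the remaining time is used up.

    extraction_orders: dict song_id -> list of (section_name, duration_ms),
                       index 0 being the section ranked first for that song.
    score_order:       song_ids in the order according to the total score.
    Returns a dict song_id -> list of chosen section names.
    """
    # Every song always contributes its first-ranked section (e.g. its hook).
    chosen = {song: [secs[0][0]] for song, secs in extraction_orders.items()}
    total = sum(secs[0][1] for secs in extraction_orders.values())

    rank = 1  # next extraction rank to try (rank 0 was taken for every song)
    while total <= remaining_ms:
        added = False
        for song in score_order:             # one song at a time, in score order
            sections = extraction_orders[song]
            if rank >= len(sections):
                continue
            name, duration = sections[rank]
            chosen[song].append(name)
            total += duration
            added = True
            if total > remaining_ms:         # remaining time exceeded: stop here
                return chosen
        if not added:                        # every section has been used up
            break
        rank += 1
    return chosen
```

  Any slight overshoot left by this loop would then be absorbed by the connection process and tempo adjustment described next.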
  • the CPU 11 performs a connection process for smoothly connecting the extracted performance sections.
  • as the connection process, for example, three types of methods are applicable: joining, crossfading, and bridging.
  • the joining method is a method in which the next performance section is started in synchronization with the end timing of a certain performance section. This method is possible when the volume, tempo, tonality (key), etc. of the preceding and following performance sections all match. For example, the first chorus and the third chorus of the same song are connected.
  • Crossfade is a method of performing by overlapping the previous performance section and the next performance section in parallel. At this time, the volume of the previous performance section is gradually decreased, and the volume of the next performance section is gradually increased. In this case, not only the volume but also the performance tempo can be gradually changed from the tempo of the previous performance section to the tempo of the next performance section, so that smoother connection can be performed.
  • when crossfading is performed, as shown in FIG. 9, the overall performance time is shortened by the length of the overlapped portion. Therefore, crossfading is used when the total performance time of all the songs is longer than the remaining time.
  • the bridge system is a system in which a phrase (bridge part) is inserted between the previous performance section and the next performance section. If the bridge portion is inserted, the overall performance time is lengthened. Therefore, when the total performance time of all the music pieces is shorter than the remaining time, a bridge part is inserted.
  • the bridge section is automatically generated based on the rhythm and chords of the previous performance section and the next performance section. For example, a drum sound that gradually shifts from the volume of the previous performance section to the volume of the next performance section and gradually shifts from the tempo of the previous performance section to the tempo of the next performance section is generated.
  • if the time signature differs between the previous performance section and the next performance section, a note (such as a syncopation or a triplet) or a rest that blurs the sense of meter is inserted, and then the time signature is changed.
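  • Of the three connection methods above, the crossfade is the simplest to sketch in code: the tail of the previous section and the head of the next overlap, with one gain ramping down as the other ramps up. The mono float-sample buffer format and the overlap length below are assumptions; a real implementation would also ramp the tempo as described above.

```python
def crossfade(prev_samples, next_samples, overlap):
    """Join two sections with a linear crossfade over `overlap` samples
    (mono float samples)."""
    overlap = min(overlap, len(prev_samples), len(next_samples))
    head = prev_samples[:len(prev_samples) - overlap]
    mixed = []
    for i in range(overlap):
        fade_in = (i + 1) / overlap          # next section gets louder
        fade_out = 1.0 - fade_in             # previous section gets quieter
        mixed.append(prev_samples[len(prev_samples) - overlap + i] * fade_out
                     + next_samples[i] * fade_in)
    return head + mixed + next_samples[overlap:]

# The overlapped portion shortens the total length, as noted above.
a, b = [1.0] * 1000, [0.5] * 1000
print(len(crossfade(a, b, 200)))  # 1800 samples instead of 2000
```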
  • in addition to, or instead of, adjusting the performance time by inserting crossfades or bridge portions, it is desirable that the CPU 11 adjust the tempo of each song in accordance with the remaining time so as to generate a medley whose performance finishes exactly at the remaining time. Increasing the tempo shortens the performance time, as shown in FIG. 9, and conversely, decreasing the tempo lengthens it.
  • in the example of FIG. 10, the tempo is changed so that the performance ends at a total performance time of 480000 msec, that is, exactly 8 minutes.
  • the total performance time before adjustment is 507164 msec, which is 27164 msec longer than the target performance time. The tempo is changed one song at a time in the order No. 5 → No. 1 → No. 6 → No. 2 → No. 4 → No. 3. If the tempo of song No. 3 were also changed, the total performance time would become 478456 msec, which is shorter than the target performance time by more than the predetermined allowance (for example, 1000 msec), so the tempo of song No. 3 is not changed. However, with the tempo changed only up to song No. 4, the total performance time is 482826 msec, which is longer than the target performance time by more than the predetermined allowance (for example, 1000 msec). Therefore, after changing the tempo of song No. 4, the CPU 11 changes the tempo further, in order starting from the song with the shortest performance time. In the example of FIG. 10, song No. 1 has the shortest performance time, so the tempo of song No. 1 is adjusted further. As a result, the total performance time becomes 480802 msec, which substantially matches the target performance time (within ±1000 msec).
  • the CPU 11 changes the tempo as described above and generates a medley whose performance ends just as the remaining time runs out.
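  • The first pass of this tempo-based fine adjustment could be sketched as below: songs are sped up one at a time, in the order according to the total score, until the total performance time lies within a small allowance of the target, and a change that would undershoot the target by more than the allowance is skipped. The fixed speed-up ratio and the data layout are assumptions; the embodiment adjusts each song's tempo individually rather than by one global ratio.

```python
SPEEDUP = 1.05        # play each adjusted song 5 % faster (an assumed ratio)
ALLOWANCE_MS = 1000   # acceptable deviation from the target, as in FIG. 10

def adjust_tempo(durations_ms, score_order, target_ms):
    """durations_ms: dict song_id -> playing time in msec.
    Speeds songs up one at a time, in score order, until the total performance
    time lies within ALLOWANCE_MS of target_ms; a change that would undershoot
    the target by more than the allowance is skipped."""
    adjusted = dict(durations_ms)
    total = sum(adjusted.values())
    for song in score_order:
        if abs(total - target_ms) <= ALLOWANCE_MS:
            break                                  # close enough: stop adjusting
        new_duration = adjusted[song] / SPEEDUP    # faster tempo -> shorter time
        new_total = total - adjusted[song] + new_duration
        if new_total < target_ms - ALLOWANCE_MS:
            continue                               # would undershoot: skip this song
        adjusted[song], total = new_duration, new_total
    return adjusted
```

  A second pass starting from the song with the shortest performance time, as in the FIG. 10 example, could then narrow any remaining gap further.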
  • whether or not to change the tempo of each song may also be decided according to the user information; for example, the tempo of a song sung by a user with a high job title may be left unchanged.
  • in the music editing method shown in FIG. 14, a reproduction reservation (selection) of songs is accepted (s1).
  • a medley is then generated (the music data is edited) (s2).
  • next, the overall operation of medley generation and performance in the karaoke apparatus will be described with reference to the flowchart of FIG. 11.
  • when the CPU 11 receives an instruction to generate a medley via the touch panel 15, the operation unit 25, or the remote controller 9 (s11), it first reads the song numbers of the reserved songs registered in the reservation list in the RAM 12 (s12). The CPU 11 then acquires the user information (s13), acquires the current time, and calculates the remaining time (s14). The remaining time may be input directly by the user, may be downloaded from the host 5, or may be calculated automatically by the CPU 11 by obtaining the end time from the host 5.
  • when the end time is acquired from the host 5 and the current time approaches the end time (for example, when the remaining time reaches 10 minutes), the remaining time and a message recommending a medley may be displayed on the monitor 24 to prompt the user to generate a medley.
  • alternatively, the remaining time at which the user wants a medley may be set in advance, and when the actual remaining time reaches the set value, the CPU 11 may automatically proceed to s12.
  • the CPU 11 performs a medley generation process (s15).
  • as described above, the medley generation process consists of the generation of the analysis data shown in FIG. 4, the setting of the song order based on the analysis data, the extraction of performance sections from each song based on the extraction orders shown in FIGS. 7A, 7B, and 7C, the connection of the extracted performance sections, and fine adjustment by tempo adjustment.
  • when the CPU 11 finishes the medley generation process, it cancels all the currently reserved songs (s16) and ends the currently playing song (s17). Note that the currently playing song may instead be played to the end; in this case, the medley is generated according to the remaining time after the currently playing song ends. In addition, so that the user perceives the currently playing song as part of the medley, the current song may be faded out and the medley faded in.
  • the CPU 11 registers the medley in the reservation list as the next performance music (s18) and performs the medley (s19). At this time, a display to start the performance of the medley may be displayed on the monitor 24.
  • as described above, the karaoke apparatus generates and plays a medley, so that the user can sing the plurality of reserved songs within the remaining time while the excitement builds.
  • in this embodiment, an example has been shown in which the music editing apparatus of the present invention is provided in a karaoke apparatus.
  • however, a medley can also be generated by a musical sound reproducing apparatus that reproduces other general music data, including audio data such as MP3, and the music editing apparatus of the present invention can also be realized using a general information processing apparatus such as a PC or a smartphone.
  • the portable audio player includes a CPU 51, a RAM 12, a ROM 53, an operation unit 55, a network I / F 14, SS 20, and a speaker 21.
  • the ROM 53 stores music data (here, audio data such as MP3).
  • the operation unit 55 accepts a music data reproduction reservation.
  • the CPU 51 performs the medley generation process.
  • thereby, the receiving means and the editing means of the present invention are realized. That is, as shown by the broken line in FIG. 12, in this example the music editing apparatus of the present invention is realized by the CPU 51, the RAM 12, and the operation unit 55.
  • for example, when listening to songs on a portable audio player, a smartphone, a navigation system, or another musical sound reproducing device on a train or in a car heading for a destination, the device can edit the songs in order of excitement according to the scheduled arrival time. In particular, the order of the songs can be changed according to the current position; for example, if a song is registered in the regional ranking shown in FIG. 5 for the region corresponding to the current position, the score of that song is increased.
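  • The position-dependent boost just mentioned could be sketched as follows. The region names, the data layout, and the boost value are assumptions for illustration.

```python
REGIONAL_BOOST = 10  # extra points for a song ranked in the current region (assumed value)

def apply_regional_boost(scores, regional_rankings, current_region):
    """scores: dict song_id -> score.
    regional_rankings: dict region name -> set of song_ids ranked in that region."""
    local_chart = regional_rankings.get(current_region, set())
    return {song: score + (REGIONAL_BOOST if song in local_chart else 0)
            for song, score in scores.items()}

scores = {"No.1": 42, "No.2": 101}
regional = {"Kansai": {"No.1"}}          # hypothetical regional ranking
print(apply_regional_boost(scores, regional, "Kansai"))  # {'No.1': 52, 'No.2': 101}
```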
  • the position of the device itself may be detected using GPS or the like, or the position corresponding to the current time may be estimated from the departure place, the departure time, the destination, and the route from the departure place to the destination.
  • in the above examples, the song data is stored in the device itself (the HDD 13 of the karaoke apparatus 7 or the ROM 53 of the portable audio player), and the selection of a plurality of pieces of song data is received from among the song data stored in the device.
  • however, it is not always necessary to store the music data in the device itself. For example, a medley may be generated by downloading from the center 1 a database (as shown in FIG. 3C or FIG. 4) corresponding to the received selected songs, performing the performance section extraction process for each song, downloading from the center 1 only the music data of the relevant performance sections, and then executing the connection process and the fine adjustment by tempo adjustment.
  • the connection process and tempo adjustment may be performed after all the necessary data has been downloaded, or may be performed step by step while the necessary data is being streamed.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Reverberation, Karaoke And Other Acoustics (AREA)
  • Electrophonic Musical Instruments (AREA)

Abstract

In the present invention, a song editing device changes the order of songs on the basis of the priority of each song and edits the songs so that playback ends within the remaining time (a prescribed time limit); as a result, the songs can be edited so that, within the remaining time, they are played back in order of increasing excitement. For example, because highly popular songs and standards cause excitement, such songs are given high priority and the songs are reordered so that the higher a song's priority, the later it is placed in the playback order; as a result, the excitement builds as the remaining time decreases.

Description

Music editing apparatus and music editing system
The present invention relates to a music editing apparatus and a music editing system for editing music.
Generally, in a karaoke box the time for which the karaoke system can be used is limited, so the performance of all the reserved songs may not be finished within the remaining time. For this reason, for example, Patent Document 1 proposes a karaoke apparatus that completes the performance of all reserved songs within the remaining time by extracting a part of each reserved song and playing it partially.
Japanese Unexamined Patent Publication No. 2001-67085
However, merely extracting a part of each song as in the apparatus of Patent Document 1 simply makes the songs shorter and lacks excitement.
Therefore, an object of the present invention is to provide a music editing apparatus capable of editing a plurality of selected songs so that the excitement builds within the remaining time.
The music editing apparatus of the present invention includes receiving means for receiving selection of a plurality of pieces of music data, and editing means that rearranges the order of the selected songs corresponding to the music data received by the receiving means based on the priority of each piece of music data, and edits each piece of music data in accordance with a predetermined time limit.
As described above, in the music editing apparatus of the present invention, the order of the songs is rearranged based on the priority of each song and the songs are edited so that playback ends within the remaining time (predetermined time limit); the music data can therefore be edited so that the excitement builds within the remaining time. For example, well-known songs and classic standards tend to be exciting, so if a high priority is set for them and the songs are rearranged so that higher-priority songs are placed later, the excitement builds as the remaining time decreases.
Note that the priority may include information corresponding to the reproduction history of each piece of music data. The information corresponding to the reproduction history includes, for example, annual rankings, monthly rankings, weekly rankings, regional rankings, rankings by gender, rankings by age, and so on, and it is desirable to set a higher priority for songs ranked higher. Alternatively, it is desirable that a song that was ranked high in the past be given a high priority as a classic standard.
The music editing apparatus may further include user information storage means for storing user information, and may rearrange the order of the selected songs based not only on the priority of each song but also on the user information, and edit each piece of music data in accordance with the predetermined time limit. For example, in the reproduction history of each user, a high priority is set for music data that is reproduced frequently.
The following modes are conceivable as methods of editing the songs. Each piece of music data includes information indicating its constituent sections (prelude, hook, interlude, and so on), and the editing means sets an extraction order for the constituent sections in accordance with the rearranged order of the selected songs. For example, since the hook is the most exciting constituent section, editing is performed by first extracting the hook section and then extracting the verse-chorus block that contains it, and so on.
If the playback time of the extracted songs slightly exceeds the remaining time, the playback time may be shortened by, for example, increasing the tempo.
If the playback time of the extracted songs is significantly shorter than the remaining time, a song other than the selected songs (for example, a standard song) may be inserted. When inserting a song, it is preferable to determine the performance order taking the priority of that song into account as well.
When the music editing apparatus is a karaoke apparatus, the user information preferably includes a singing history, for example the number of visits to the store and the scoring results for each song. Since the room gets more excited when a singer with high scoring results sings in the second half, songs sung by such singers are preferably arranged later in the order.
The music editing apparatus according to each of the above aspects can be realized by hardware (electronic circuits) such as a DSP (Digital Signal Processor) dedicated to music data processing, or by cooperation between a general-purpose processor such as a CPU (Central Processing Unit) and a program.
Specifically, the program according to the present invention causes a computer to execute a reception process for receiving selection of a plurality of pieces of music data, and an editing process for rearranging the order of the selected songs corresponding to the received music data based on the priority of each piece of music data and editing each piece of music data in accordance with a predetermined time limit.
The program according to the above aspect can be provided in a form stored in a computer-readable recording medium and installed in a computer. The recording medium is, for example, a non-transitory recording medium; an optical recording medium (optical disc) such as a CD-ROM is a typical example, but any known form of recording medium such as a semiconductor recording medium or a magnetic recording medium may be used. The program of the present invention can also be provided in the form of distribution via a communication network and installed in a computer.
According to one aspect of the present invention, there is provided a music editing method in which the selection of a plurality of pieces of music data is accepted, the order of the selected songs corresponding to the selected music data is rearranged based on the priority of each piece of music data, and each piece of music data is edited in accordance with a predetermined time limit.
According to the present invention, a plurality of selected songs can be edited so that the excitement builds within the remaining time.
FIG. 1 is a block diagram showing the configuration of the karaoke system.
FIG. 2 is a block diagram showing the configuration of the karaoke apparatus.
FIGS. 3A, 3B, and 3C are diagrams showing the structure of the music data and the database.
FIG. 4 is a diagram showing the analysis data.
FIG. 5 is a diagram showing the ranking information.
FIG. 6 is a diagram showing the user information.
FIGS. 7A, 7B, and 7C are diagrams showing the extraction orders.
FIG. 8 is a diagram showing the priority order for each user.
FIG. 9 is a diagram showing the crossfade and tempo adjustment techniques.
FIG. 10 is a diagram showing changes in performance time due to tempo adjustment.
FIG. 11 is a flowchart showing the overall operation of medley generation and performance.
FIG. 12 is a block diagram showing the configuration of the portable audio player.
FIG. 13 is a block diagram showing a music editing apparatus according to an embodiment of the present invention.
FIG. 14 is a flowchart showing a music editing method according to an embodiment of the present invention.
FIG. 1 is a diagram showing the configuration of the karaoke system according to the present embodiment. The karaoke system consists of a center (server) 1 and a plurality of karaoke stores 3 connected via a network 2 such as the Internet.
Each karaoke store 3 is provided with a host 5 connected to the network 2 and a plurality of karaoke apparatuses 7 connected to the network 2 via the host 5. The host 5 also functions as a relay device such as a router; however, the relay device may be provided as a separate unit. The host 5 is installed, for example, in the management room of the karaoke store. Each of the plurality of karaoke apparatuses 7 is installed in its own private room (karaoke box), and each karaoke apparatus 7 is provided with a remote controller 9.
FIG. 2 is a block diagram showing the configuration of the karaoke apparatus. The karaoke apparatus 7 consists of a CPU 11 that controls the operation of the entire apparatus and various components connected to the CPU 11. To the CPU 11 are connected a RAM 12, an HDD 13, a network interface (I/F) 14, an LCD (touch panel) 15, an A/D converter 17, a sound source 18, a mixer (effector) 19, a decoder 22 for MPEG and other formats, a display processing unit 23, an operation unit 25, and a transmission/reception unit 26.
The HDD 13 stores an operation program for the CPU 11. In the RAM 12, which is a work memory, an area into which the operation program of the CPU 11 is read, an area into which music data is read for playing karaoke songs, a reservation list, and the like are set. The HDD 13 also stores music data for playing karaoke songs, video data for displaying background video on the monitor 24, and the like. The video data includes both moving images and still images. The music data and video data are periodically distributed from the center 1 and updated.
The HDD 13 also holds a database in which various pieces of information (singer name, song name, genre, song composition) are stored in association with the song number of each piece of music data (see FIG. 3B and FIG. 3C). The database is updated as various information is distributed from the center 1.
The CPU 11 is a control unit that comprehensively controls the karaoke apparatus; it functionally incorporates a sequencer and performs the karaoke performance. The CPU 11 also performs audio signal generation processing, video signal generation processing, and medley generation processing.
The touch panel 15 and the operation unit 25 are provided on the front surface of the karaoke apparatus. The CPU 11 displays images corresponding to operation information input from the touch panel 15 on the touch panel 15, thereby realizing a GUI. The remote controller 9 realizes a similar GUI. The CPU 11 performs various operations based on operation information input via the touch panel 15, the operation unit 25, or, through the transmission/reception unit 26, the remote controller 9. For example, when a selection (reproduction reservation) of music data is accepted from the user via the touch panel 15, the operation unit 25, or the remote controller 9, the song corresponding to the selected music data is registered in the reservation list in the RAM 12. That is, the touch panel 15, the operation unit 25, or the transmission/reception unit 26 corresponds to the receiving means of the present invention, as shown in FIG. 13. When the user instructs medley generation using the touch panel 15, the operation unit 25, or the remote controller 9, the CPU 11 performs the medley generation process. That is, the CPU 11 corresponds to the editing means of the present invention, as shown in FIG. 13. The music editing apparatus of the present invention is realized by the CPU 11, the RAM 12, the HDD 13, the remote controller 9, and the like.
The medley generation process, described in detail later, is a process that, after the selection (reproduction reservation) of songs (music data) has been accepted, changes the performance order of the reserved songs, extracts a part of each song, and joins the parts smoothly to generate a single new piece of music.
Next, the configuration for performing karaoke will be described. As described above, the CPU 11 functionally incorporates a sequencer. The CPU 11 reads from the HDD 13 the music data corresponding to the song numbers of the reserved songs registered in the reservation list in the RAM 12, and performs the karaoke performance with the sequencer.
As shown in FIG. 3A, for example, the music data consists of a header in which the song number and the like are written, a musical tone track in which performance MIDI data is written, a guide melody track in which guide melody MIDI data is written, a lyrics track in which lyric MIDI data is written, a chorus track in which backing chorus playback timings and the audio data to be played back are written, and so on. Note that the format of the music data is not limited to this example.
The sequencer controls the sound source 18 based on the data of the musical tone track and the guide melody track, and generates the musical tones of the karaoke song. The sequencer also reproduces the backing chorus audio data (compressed audio data such as MP3 attached to the music data) at the timings designated by the chorus track. Further, the sequencer synthesizes the character pattern of the lyrics in synchronization with the progress of the song based on the lyrics track, converts the character pattern into a video signal, and inputs it to the display processing unit 23.
The sound source 18 forms a musical tone signal (digital audio signal) according to the data (note event data) input from the CPU 11 by the processing of the sequencer. The formed musical tone signal is input to the mixer 19.
The mixer 19 applies effects such as echo to the musical tone signal generated by the sound source 18, the chorus sound, and the singer's singing voice signal input from the microphone 16 via the A/D converter 17, and mixes these signals.
Each mixed digital audio signal is input to the sound system (SS) 20. The sound system 20 incorporates a D/A converter and a power amplifier; it converts the input digital signal into an analog signal, amplifies it, and emits the sound from the speaker 21. The effects that the mixer 19 applies to each audio signal and the balance of the mixing are controlled by the CPU 11.
The CPU 11 reads the video data stored in the HDD 13 and reproduces background video and the like in synchronization with the generation of the musical tones and the lyrics telop by the sequencer. The video data of the moving images is encoded in the MPEG format. The CPU 11 inputs the read video data to the decoder 22. The decoder 22 converts the input MPEG or other data into a video signal and inputs it to the display processing unit 23. In addition to the video signal of the background video, the character pattern of the lyrics telop is input to the display processing unit 23. The display processing unit 23 superimposes the lyrics telop and the like on the video signal of the background video by OSD and outputs the result to the monitor 24. The monitor 24 displays the video signal input from the display processing unit 23.
In this way, the karaoke performance is carried out. Note that, at the start or end of the performance of a song, the CPU 11 counts up the number of performances of that song in the database and stores the performance history in the HDD 13 as log information (history). The log information is periodically transmitted to the center 1, and the log information of karaoke apparatuses 7 nationwide is tabulated at the center 1 as ranking information (see FIG. 5).
 Next, medley generation will be described. After the user has made music reproduction reservations using the touch panel 15, the operation unit 25, or the remote control 9, and then instructs medley generation using the touch panel 15, the operation unit 25, or the remote control 9, the CPU 11 performs the medley generation process. The CPU 11 reads the song numbers of the reserved songs registered in the reservation list in the RAM 12, rearranges the song order based on the priority of each song, and generates a medley by editing the songs so that the performance ends within the remaining time (the predetermined time limit) during which the karaoke system can be used. The priority of each song corresponds to how much it livens up the karaoke session.
 FIG. 4 shows analysis data generated based on the priority of each song and on the user information. The figure shows an example in which six songs are registered in the reservation list and three users sing. As shown in FIG. 4, each song has a score set for each item as a priority according to its reproduction history (performance history). The performance history includes annual, monthly, weekly, regional, gender, and age rankings, and higher scores are set for songs ranked higher.
 FIG. 5 shows the ranking information. The ranking information is periodically downloaded from the center 1 and stored in the HDD 13. Alternatively, the ranking information may be downloaded by querying the center 1 at the time the analysis data is generated (in which case the HDD 13 realizes the singing history storage means of the present invention). Each song is given a score according to the number of times it appears within the top 10 and within the top 100. For example, song No. 2 "song number: 157270" in FIG. 4 is registered at 99th in the weekly ranking, 100th in the monthly ranking, and 99th in the annual ranking, as shown in FIG. 5, so 10 points × 3 appearances = 30 points are set in the "within top 100" column of FIG. 4. The same song is also registered at 4th in the standard (opener) ranking and 3rd in the standard (closer) ranking, as shown in FIG. 5, so 10 points × 2 appearances = 20 points are set in the "standard" column of FIG. 4. In this way, the CPU 11 calculates the score of each song from the song numbers registered in the reservation list and generates the analysis data shown in FIG. 4.
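 As a concrete illustration of this scoring rule, the following sketch counts ranking appearances and assigns 10 points per appearance; the ranking tables, category names, and point weights are illustrative assumptions, not data taken from the patent.

```python
# Hypothetical sketch of the ranking-based priority score described above.
# The ranking tables and the flat 10-point weight are illustrative assumptions.

RANKINGS = {
    "weekly":         {157270: 99},
    "monthly":        {157270: 100},
    "annual":         {157270: 99},
    "standard_start": {157270: 4},
    "standard_end":   {157270: 3},
}

def ranking_score(song_number):
    """Return per-item scores: 10 points per appearance in each category."""
    top10 = top100 = standard = 0
    for name, table in RANKINGS.items():
        rank = table.get(song_number)
        if rank is None:
            continue
        if name.startswith("standard"):
            standard += 10          # opener/closer "standard" rankings
        elif rank <= 10:
            top10 += 10
        elif rank <= 100:
            top100 += 10
    return {"within_top10": top10, "within_top100": top100, "standard": standard}

print(ranking_score(157270))
# {'within_top10': 0, 'within_top100': 30, 'standard': 20}  -- matches FIG. 4, No. 2
```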
 Furthermore, in this example, scores based on user information are also set in the analysis data. As shown in FIG. 4, the score items based on user information are "singer", "others' history", "95 points or more", and "90 points or more". The "singer" score is further broken down into "age", "organization", and "visits".
 FIG. 6 shows the user information. The user information is stored in the center 1. When a user enters the store, the CPU 11 downloads the user information from the center 1 and stores it in the RAM 12 (in which case the RAM 12 realizes the user information storage means of the present invention). The user information includes each user's ID, name, date of birth, gender, family, school of origin, affiliation, job title, singing history, favorites, and so on. The CPU 11 generates the analysis data based on this user information. For example, as shown in FIG. 4, points are given to the older of the users "...A3", "...B1", and "...CC": in this example, 10 points are set for the oldest user "...B1" and 5 points for the next oldest user "...CC". Also, since all the users in this example belong to the same organization, a high score (30 points) is set as the "organization" points for the user "...B1", who holds the highest job title. If the school of origin (for example, an elementary school) were the same for all users, a high score could instead be set for the oldest user (the same score as the age score may be used). Alternatively, if all the users were members of the same family, a high score could be set for one of them (for example, the father). Furthermore, the CPU 11 assigns points according to the number of visits registered in the singing history; for example, the user "...A3" has visited the store 12 times, so 12 points are set. The CPU 11 adds up the "age", "organization", and "visits" points to obtain the singer score.
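 The singer score just described can be sketched as follows; the field names, the exact point values for age and organization, and the sample data are assumptions chosen to reproduce the pattern of the example rather than definitions taken from the text.

```python
# Hypothetical sketch of the "singer" score (age + organization + visits).
# Field names and sample data are illustrative assumptions.
from datetime import date

USERS = [
    {"id": "...A3", "birth": date(1990, 4, 1),  "title_rank": 1, "visits": 12},
    {"id": "...B1", "birth": date(1975, 1, 15), "title_rank": 3, "visits": 6},
    {"id": "...CC", "birth": date(1985, 7, 30), "title_rank": 2, "visits": 3},
]

def singer_scores(users):
    scores = {u["id"]: 0 for u in users}
    # Age: 10 points to the oldest user, 5 to the second oldest.
    for user, pts in zip(sorted(users, key=lambda u: u["birth"]), [10, 5]):
        scores[user["id"]] += pts
    # Organization: all users share an affiliation, so the highest title gets 30 points.
    boss = max(users, key=lambda u: u["title_rank"])
    scores[boss["id"]] += 30
    # Visits: one point per recorded visit.
    for user in users:
        scores[user["id"]] += user["visits"]
    return scores

print(singer_scores(USERS))   # {'...A3': 12, '...B1': 46, '...CC': 8}
```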
 Also, as shown in the "others' history" column of FIG. 4, if users other than the one scheduled to sing a reserved song have sung it in the past, a score is set according to the number of such users. For example, No. 3 "song number: 176304" is scheduled to be sung by the user "...CC", but singing histories of the other users "...A3" and "...B1" exist, so 10 points × 2 = 20 points are set in "others' history".
 Furthermore, scores based on each user's past scoring results are set in the analysis data. For example, the user "...A3" has a scoring result of 95 points or more for No. 1 "song number: 124504", so 10 points are set in "95 points or more" and 10 points in "90 points or more".
 In FIG. 4 the "genre" column is blank; points are assigned in the "genre" column only when all the users are in a similar age group. For example, if everyone is in their twenties, 10 points are set for the J-POP and rock genres.
 When the CPU 11 receives an instruction to generate a medley, it generates the analysis data described above and totals the scores set for each song. The CPU 11 then rearranges the performance order of the reserved songs based on the total scores. A song with a higher total score is considered to liven up the session more, so it is desirable to arrange the songs in ascending order of total score so that the excitement builds as the remaining time decreases. However, since the opening song should also be lively, it is desirable to place the song with the second highest score first. In the example of FIG. 4, No. 2 (101 points), which has the second highest score, becomes the first song; No. 4 (22 points), which has the lowest score, becomes the second song; these are followed by No. 1 (42 points), No. 3 (78 points), and No. 6 (88 points); and No. 5 (111 points), which has the highest score, becomes the last song.
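 A minimal sketch of this reordering rule (second-highest score first, then ascending order, highest score last) is shown below; the song list reproduces the totals from FIG. 4, and the helper name is an illustrative assumption.

```python
# Hypothetical sketch of the song reordering described above.

def medley_order(songs):
    """songs: list of (song_id, total_score); returns the performance order."""
    ranked = sorted(songs, key=lambda s: s[1])     # ascending by total score
    if len(ranked) < 3:
        return ranked
    opener, closer = ranked[-2], ranked[-1]        # 2nd highest opens, highest closes
    return [opener] + ranked[:-2] + [closer]

songs = [("No.1", 42), ("No.2", 101), ("No.3", 78),
         ("No.4", 22), ("No.5", 111), ("No.6", 88)]
print([s[0] for s in medley_order(songs)])
# ['No.2', 'No.4', 'No.1', 'No.3', 'No.6', 'No.5']
```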
 The song order is not limited to this example; it may also take into account, for instance, that songs of the same genre should not follow one another, that songs by the same user should not follow one another, or that songs by users of the same gender should not follow one another.
 Alternatively, the songs may be arranged in descending order of total score, or the song with the lowest total score may be placed last and the song with the next lowest total score placed first.
 Next, the CPU 11 extracts parts of the rearranged songs and edits them so that all the reserved songs can be performed within the remaining time. First, the CPU 11 determines the extraction order of the performance sections of each song.
 FIGS. 7(A), 7(B), and 7(C) show the extraction orders, which differ depending on whether a song is the first song, the last song, or another song. FIG. 7(A) shows the extraction order for the first song: the hook of the first chorus ranks first in the extraction order, and the first chorus containing that hook ranks second. FIG. 7(B) shows the extraction order for the last song: the grand hook ranks first, and the chorus containing the grand hook ranks second. In the first song the intro ranks third and the outro is not extracted (it is deleted), whereas in the last song the outro ranks third and the intro is not extracted. FIG. 7(C) shows the extraction order for the other songs: they follow the same extraction order as the first song, but neither the intro nor the outro is extracted.
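 The position-dependent extraction orders of FIGS. 7(A) to 7(C) can be captured as simple lookup tables; the section labels below are paraphrases of the figure, and the small helper and sample order are illustrative assumptions.

```python
# Hypothetical lookup of the extraction orders in FIGS. 7(A)-(C).
# Section labels are paraphrases; only their relative priority matters here.

EXTRACTION_ORDER = {
    "first": ["hook_of_1st_chorus", "1st_chorus", "intro"],        # outro dropped
    "last":  ["grand_hook", "chorus_with_grand_hook", "outro"],    # intro dropped
    "other": ["hook_of_1st_chorus", "1st_chorus"],                 # intro/outro dropped
}

def position(index, total):
    return "first" if index == 0 else "last" if index == total - 1 else "other"

order = ["No.2", "No.4", "No.1", "No.3", "No.6", "No.5"]   # from the FIG. 4 example
for i, song in enumerate(order):
    print(song, EXTRACTION_ORDER[position(i, len(order))])
```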
 The CPU 11 extracts the performance sections (constituent sections) of each song in the extraction order described above and edits them so that all the reserved songs can be performed within the remaining time. As shown in FIGS. 3(B) and 3(C), a start time, an end time, and a section length are preset for each performance section of each song, so the CPU 11 can determine whether the medley fits within the remaining time by comparing the total of these section lengths with the remaining time during which the karaoke system can be used. The remaining time may be set manually by the user using the touch panel 15, the operation unit 25, or the remote control 9, or it may be set by the store according to the scheduled usage time when use of the karaoke system begins.
 The CPU 11 first extracts the performance section ranked first in the extraction order of each song and determines whether the total fits within the remaining time. If it determines that the total fits within the remaining time, it extracts the performance sections ranked second in the extraction order. Thereafter, the extraction of performance sections is repeated according to the extraction order. However, the performance sections ranked second or lower are extracted not for all songs at once but one song at a time, in an order according to the total scores, and after each extraction the CPU 11 determines whether the total still fits within the remaining time. The order according to the total scores may simply be descending order of total score, but as shown in FIG. 8, the songs may instead be grouped by user in order of total score, with the users arranged in descending order of singer score. In FIG. 8, the user "...B1" has the highest singer score and the user "...CC" the lowest, so the order according to the total scores is No. 5 → No. 1 → No. 6 → No. 2 → No. 4 → No. 3. The performance section ranked second in the extraction order is therefore first extracted for song No. 5; since No. 5 is the last song, as described above, the chorus containing the grand hook is extracted. If the CPU 11 determines that the total still fits within the remaining time, it next extracts the performance section ranked second in the extraction order for song No. 1, so the first chorus of that song is extracted. In this way, each song is extracted gradually, starting with its most important performance sections, and the extraction ends at the point where the remaining time is exceeded.
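 The loop below is one possible reading of this extraction procedure: round one takes the top-ranked section of every song, and later rounds add one section per song in the total-score order until the remaining time is exceeded. The section lengths, data layout, and exact stop condition are assumptions for illustration.

```python
# Hypothetical sketch of the round-based section extraction described above.
# sections[song] is its section list in extraction order, as (name, length_ms).

def extract_sections(order, sections, score_order, remaining_ms):
    picked = {song: [] for song in order}
    total = 0
    for song in order:                              # round 1: top-ranked sections
        name, length = sections[song][0]
        picked[song].append(name)
        total += length
    if total > remaining_ms:
        return picked, total                        # already over the limit
    depth = 1
    while True:                                     # rounds 2+: one song at a time
        added = False
        for song in score_order:
            if depth < len(sections[song]):
                name, length = sections[song][depth]
                picked[song].append(name)
                total += length
                added = True
                if total > remaining_ms:
                    return picked, total            # stop once the limit is exceeded
        if not added:
            return picked, total                    # every section already extracted
        depth += 1
```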
 If the remaining time is already exceeded by a large margin after only the performance sections ranked first in the extraction order (the hooks) have been extracted, a process of extracting a more specific part from within each hook is performed. Such a specific part can be extracted when it is preset for each piece of song data in a database such as that shown in FIG. 3(C). If the remaining time is still exceeded by a large margin even after the specific parts have been extracted, songs may be deleted (not performed) starting from the one with the lowest priority. Conversely, if the total is much shorter than the remaining time even after all the performance sections of all the songs have been extracted, the same performance sections may be used repeatedly, or standard songs may be inserted. When a song is inserted, it is preferable to determine the performance order and perform the extraction while also taking the priority of the inserted song into account.
 Next, the CPU 11 performs a connection process to join the extracted performance sections smoothly. Three types of connection method can be applied: joining, crossfading, and bridging. The joining method starts the next performance section in synchronization with the end timing of the preceding performance section. This method is possible when the volume, tempo, tonality (key), and so on of the preceding and following performance sections all match, for example when the first and third choruses of the same song are connected.
 Crossfading plays the preceding performance section and the next performance section overlapped in parallel, gradually lowering the volume of the preceding section while gradually raising the volume of the next section. In this case, not only the volume but also the performance tempo can be shifted gradually from the tempo of the preceding section to the tempo of the next section, allowing an even smoother connection. When crossfading is used, the overall performance time is shortened by the length of the overlapped portion, as shown in FIG. 9, so crossfading is used when the total performance time of all the songs is longer than the remaining time.
 The bridging method inserts a phrase (bridge part) between the preceding performance section and the next performance section. Inserting a bridge part lengthens the overall performance time, so a bridge part is inserted when the total performance time of all the songs is shorter than the remaining time. The bridge part is automatically generated based on the rhythms and chords of the preceding and following performance sections. For example, a drum part is generated that gradually shifts from the volume of the preceding section to the volume of the next section and from the tempo of the preceding section to the tempo of the next section. If the time signature differs between the preceding and following sections, notes that blur the sense of meter (for example, syncopations or half-note triplets) or rests are inserted to change the time signature. If the key (tonality) differs between the preceding and following sections, the key is modulated through a chord progression.
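 Summing up the three connection rules just described, one way to pick a method for each pair of adjacent sections is sketched below; the attribute names and the exact equality test are illustrative assumptions.

```python
# Hypothetical sketch of choosing a connection method (join / crossfade / bridge).

def choose_connection(prev, nxt, total_ms, remaining_ms):
    """prev / nxt: dicts with 'volume', 'tempo' and 'key' for the two sections."""
    if (prev["volume"], prev["tempo"], prev["key"]) == (nxt["volume"], nxt["tempo"], nxt["key"]):
        return "join"        # identical feel: butt the sections together
    if total_ms > remaining_ms:
        return "crossfade"   # overlapping the sections shortens the medley
    return "bridge"          # an auto-generated bridge phrase lengthens it

prev = {"volume": 0.8, "tempo": 120, "key": "C"}
nxt  = {"volume": 0.7, "tempo": 132, "key": "D"}
print(choose_connection(prev, nxt, total_ms=507164, remaining_ms=480000))   # crossfade
```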
 Furthermore, it is desirable for the CPU 11 to adjust the tempo of each song according to the remaining time so that, in addition to or instead of adjusting the performance time by crossfading or inserting bridge parts, a medley is generated whose performance ends exactly when the remaining time runs out. Raising the tempo shortens the performance time, as shown in FIG. 9, and conversely, lowering the tempo lengthens it.
 It is desirable to change the tempo one song at a time and one step at a time. For example, as shown in FIG. 10, the tempo is changed in the order No. 5 → No. 1 → No. 6 → No. 2 → No. 4 → No. 3, which corresponds to the order according to the total scores shown in FIG. 8.
 FIG. 10 shows an example in which the tempo is changed so that the total performance time becomes 480000 msec, that is, exactly 8 minutes. The total performance time before adjustment is 507164 msec, which is 27164 msec longer than the target performance time. If the tempo were changed one song at a time in the order No. 5 → No. 1 → No. 6 → No. 2 → No. 4 → No. 3, the total performance time would become 478456 msec at the point where the tempo of No. 3 is changed, which is shorter than the target performance time by the predetermined time (for example, 1000 msec) or more, so the tempo of song No. 3 is not changed. With the tempo changes up to No. 4, however, the total performance time is 482826 msec, which is longer than the target by the predetermined time (for example, 1000 msec) or more. Therefore, after changing the tempo of No. 4, the CPU 11 changes the tempo of the songs in order of shortest performance time. In the example of FIG. 10, song No. 1 has the shortest performance time, so the tempo of song No. 1 is changed. As a result, the total performance time becomes 480802 msec, which almost matches the target performance time (the difference is less than ±1000 msec).
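 The two-phase adjustment in this worked example can be sketched as follows; the per-song time saved by one tempo step is assumed input data, and only the ±1000 msec tolerance and the ordering follow the example.

```python
# Hypothetical sketch of the two-phase tempo fine-tuning described above.
# step_ms[song] = time saved by one tempo step (assumed data); TOL follows the example.

TOL = 1000   # msec

def adjust_tempo(lengths, step_ms, score_order, target):
    total = sum(lengths.values())

    def speed_up(song):
        nonlocal total
        lengths[song] -= step_ms[song]
        total -= step_ms[song]

    # Phase 1: one step per song in the total-score order, skipping any song
    # whose step would leave the total too short (as with No. 3 in FIG. 10).
    for song in score_order:
        if total - step_ms[song] < target - TOL:
            continue
        speed_up(song)
        if abs(total - target) <= TOL:
            return lengths, total
    # Phase 2: still too long, so apply further steps starting with the
    # shortest song (No. 1 in FIG. 10) until the total is within tolerance.
    for song in sorted(lengths, key=lengths.get):
        if total - target <= TOL:
            break
        speed_up(song)
    return lengths, total
```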
 The CPU 11 changes the tempo as described above and generates a medley whose performance ends exactly when the remaining time runs out. Whether or not to change the tempo of each song may also be decided according to the user information; for example, the tempo of a song sung by a user with a high job title may be left unchanged.
 As shown in FIG. 14, the music editing method of the present invention accepts reproduction reservations (selections) of songs (song data) (s1) and then generates a medley (edits the song data) based on the reserved songs (s2).
 Next, the overall medley generation and performance operation of the karaoke apparatus will be described with reference to a flowchart. As shown in FIG. 11, when the CPU 11 receives an instruction to generate a medley via the touch panel 15, the operation unit 25, or the remote control 9 (s11), it first reads the song numbers of the reserved songs registered in the reservation list in the RAM 12 (s12). The CPU 11 then acquires the user information (s13), acquires the current time, and calculates the remaining time (s14). The remaining time may be input directly by the user, downloaded from the host 5, or calculated automatically by the CPU 11 from an end time obtained from the host 5.
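 For step s14, one simple way to derive the remaining time from an end time obtained from the host is sketched below; the function and its arguments are illustrative, not part of the patent.

```python
# Hypothetical sketch of the remaining-time calculation in s14.
from datetime import datetime

def remaining_time_ms(end_time, now=None):
    now = now or datetime.now()
    return max(0, int((end_time - now).total_seconds() * 1000))

end = datetime(2014, 3, 13, 23, 0, 0)
print(remaining_time_ms(end, now=datetime(2014, 3, 13, 22, 52, 0)))   # 480000 (8 minutes)
```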
 When the end time is obtained from the host 5, the remaining time may be displayed on the monitor 24 together with a message recommending conversion to a medley once the current time approaches the end time (for example, when 10 minutes remain), prompting the user to instruct medley generation.
 Alternatively, the user may set in advance the remaining time at which conversion to a medley is desired, and when the actual remaining time reaches that set value, the CPU 11 may automatically execute the processing from s12 onward.
 The CPU 11 then performs the medley generation process (s15). As described above, the medley generation process consists of generating the analysis data shown in FIG. 4, setting the song order based on that analysis data, extracting the performance sections of each song based on the extraction orders shown in FIGS. 7(A), 7(B), and 7(C), connecting the performance sections, and fine-tuning by tempo adjustment.
 When the medley generation process is completed, the CPU 11 cancels all the currently reserved songs (s16) and ends the song currently being performed (s17). Alternatively, the song currently being performed may be played to the end rather than being stopped partway; in that case, a medley is generated according to the remaining time after the currently performed song ends. In addition, so that the user perceives the currently performed song as part of the medley, a crossfade process may be carried out in which the current song is faded out while the medley is faded in.
 Thereafter, the CPU 11 registers the medley in the reservation list as the next song to be performed (s18) and performs the medley (s19). At this time, a message indicating that performance of the medley is starting may be displayed on the monitor 24.
 As described above, the karaoke apparatus generates and performs a medley, allowing the user to sing the plurality of reserved songs within the remaining time while the excitement builds.
 Although this embodiment shows an example of a karaoke apparatus provided with the music editing device of the present invention, a medley can also be generated in a musical sound reproduction device that reproduces other general music data (including audio data such as MP3), and the music editing device of the present invention can also be realized using a general information processing device such as a PC or a smartphone.
 A portable audio player, which is an example of such a musical sound reproduction device, will be described with reference to FIG. 12. In FIG. 12, components common to the karaoke apparatus 7 of FIG. 2 are given the same reference numerals and their description is omitted. The portable audio player includes a CPU 51, a RAM 12, a ROM 53, an operation unit 55, a network I/F 14, an SS 20, and a speaker 21. The ROM 53 stores song data (here, audio data such as MP3). The operation unit 55 accepts reproduction reservations for the song data. When the user instructs medley generation using the operation unit 55, the CPU 51 performs the medley generation process. The accepting means and editing means of the present invention are thereby realized; that is, as indicated by the broken line in the figure, in this example the music editing device of the present invention is realized by the CPU 51, the RAM 12, and the operation unit 55.
 When audio data such as MP3 is reproduced in such a musical sound reproduction device, information such as the priority, the extraction order, and the performance sections is prepared separately from the audio data (for example, downloaded from the center 1 each time and stored in the RAM 12).
 For example, when listening to songs on a train or in a car heading toward a destination using a portable audio player, a smartphone, a navigation system, or another musical sound reproduction device, the device can edit the songs so that the excitement builds toward the estimated time of arrival at the destination. In particular, the song order can also be changed according to the current position: for example, if a song is registered in the regional ranking shown in FIG. 5 for the region corresponding to the current position, the score of that song is raised. The position of the device may be detected using GPS or the like, or the position corresponding to the current time may be estimated from the departure point, the departure time, the destination, the route from the departure point to the destination, and so on.
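 A minimal sketch of this position-dependent boost, assuming a lookup of the regional ranking for the current region, follows; the region name, song set, and 10-point boost are invented for illustration.

```python
# Hypothetical sketch of the regional-ranking boost described above.

REGIONAL_RANKING = {"Kansai": {157270, 176304}}   # illustrative data

def regional_boost(song_number, region, base_score, boost=10):
    if song_number in REGIONAL_RANKING.get(region, set()):
        return base_score + boost
    return base_score

print(regional_boost(157270, "Kansai", 30))   # 40
```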
 In the above examples, the song data is stored in the device itself (the HDD 13 of the karaoke apparatus 7 or the ROM 53 of the portable audio player) and the selection of a plurality of pieces of song data is accepted from among the song data stored in the device, but the song data does not necessarily have to be stored in the device. For example, a medley can also be generated by downloading the database corresponding to the accepted selected songs (such as that shown in FIG. 3(C) or FIG. 4) from the center 1, performing the performance section extraction process for each song, then downloading only the song data of the corresponding performance sections from the center 1, and executing the connection process and the fine-tuning by tempo adjustment. The connection process and the tempo adjustment may be performed after all the necessary data has been downloaded, or they may be performed each time while the necessary data is being streamed.
 This application is based on Japanese Patent Application No. 2013-052984 filed on March 15, 2013, the contents of which are incorporated herein by reference.
According to the present invention, a plurality of selected songs can be edited so that the excitement builds within the remaining time.
1…Center 2…Network 3…Karaoke store 5…Host 7…Karaoke apparatus 9…Remote control 11…CPU 12…RAM 13…HDD 15…Touch panel 16…Microphone 17…A/D converter 18…Sound source 19…Mixer 20…Sound system 21…Speaker 22…Decoder 23…Display processing unit 24…Monitor 25…Operation unit 26…Transmitting/receiving unit

Claims (8)

  1.  A music editing apparatus comprising:
      accepting means for accepting selection of a plurality of pieces of song data; and
      editing means for rearranging an order of selected songs corresponding to the selected song data accepted by the accepting means, based on a priority of each piece of song data, and editing each piece of song data in accordance with a predetermined time limit.
  2.  The music editing apparatus according to claim 1, wherein the priority includes information corresponding to a reproduction history of each piece of song data.
  3.  The music editing apparatus according to claim 1 or 2, further comprising user information storage means for storing user information,
      wherein the editing means rearranges the order of the selected songs based on the user information in addition to the priority, and edits each piece of song data in accordance with the predetermined time limit.
  4.  The music editing apparatus according to any one of claims 1 to 3, wherein each piece of song data includes information indicating constituent sections, and
      the editing means sets an extraction order for each constituent section according to the rearranged order of the selected songs and extracts predetermined constituent sections from each selected song according to the extraction order.
  5.  The music editing apparatus according to any one of claims 1 to 4, wherein the editing means performs editing by changing a tempo of the song data so that a length of the edited song data corresponds to the predetermined time limit.
  6.  The music editing apparatus according to any one of claims 1 to 5, wherein the editing means inserts song data other than the song data accepted by the accepting means.
  7.  A music editing system comprising the music editing apparatus according to any one of claims 1 to 6 and a server,
      wherein the editing means acquires information relating to the priority from the server and performs the editing based on the acquired information relating to the priority.
  8.  A music editing method comprising:
      accepting selection of a plurality of pieces of song data; and
      rearranging an order of selected songs corresponding to the selected song data based on a priority of each piece of song data, and editing each piece of song data in accordance with a predetermined time limit.
PCT/JP2014/056806 WO2014142288A1 (en) 2013-03-15 2014-03-13 Song editing device and song editing system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2013052984A JP2014178535A (en) 2013-03-15 2013-03-15 Music editing device, karaoke device, and music editing system
JP2013-052984 2013-03-15

Publications (1)

Publication Number Publication Date
WO2014142288A1 true WO2014142288A1 (en) 2014-09-18

Family

ID=51536936

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2014/056806 WO2014142288A1 (en) 2013-03-15 2014-03-13 Song editing device and song editing system

Country Status (2)

Country Link
JP (1) JP2014178535A (en)
WO (1) WO2014142288A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6403120B2 (en) * 2015-11-11 2018-10-10 株式会社サンセイアールアンドディ Game machine
JP6403119B2 (en) * 2015-11-11 2018-10-10 株式会社サンセイアールアンドディ Game machine
JPWO2022230171A1 (en) * 2021-04-30 2022-11-03

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3607758B2 (en) * 1995-08-25 2005-01-05 ブラザー工業株式会社 Music player
JP2013190764A (en) * 2012-03-15 2013-09-26 Xing Inc Karaoke device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000148167A (en) * 1998-11-12 2000-05-26 Daiichikosho Co Ltd Karaoke device and communication karaoke system having characteristic in editing method of medley music
JP2001067085A (en) * 1999-08-26 2001-03-16 Nippon Columbia Co Ltd Karaoke device
JP2004205828A (en) * 2002-12-25 2004-07-22 Yamaha Corp Karaoke machine
JP2006010988A (en) * 2004-06-24 2006-01-12 Fujitsu Ltd Method, program, and device for optimizing karaoke music selection
JP2010156783A (en) * 2008-12-26 2010-07-15 Daiichikosho Co Ltd Karaoke performance system with tempo control function
JP2011197345A (en) * 2010-03-19 2011-10-06 Yamaha Corp Karaoke device

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113611267A (en) * 2021-08-17 2021-11-05 网易(杭州)网络有限公司 Word and song processing method and device, computer readable storage medium and computer equipment

Also Published As

Publication number Publication date
JP2014178535A (en) 2014-09-25

Similar Documents

Publication Publication Date Title
JPH06124094A (en) Karaoke @(3754/24)accompaniment of recorded music) device
JP6452229B2 (en) Karaoke sound effect setting system
WO2014142288A1 (en) Song editing device and song editing system
WO2014181773A1 (en) Music session management method and music session management device
JP5544961B2 (en) server
WO2011111825A1 (en) Karaoke system and karaoke performance terminal
JP2009134013A (en) Karaoke device capable of performing karaoke song selection and reservation based on musical composition group in personal medium
JP6316099B2 (en) Karaoke equipment
JP6894766B2 (en) Karaoke equipment
JP5234950B2 (en) Singing recording system
JP4182782B2 (en) Karaoke equipment
JP5439994B2 (en) Data collection / delivery system, online karaoke system
JP3941616B2 (en) Distribution method of online karaoke system
JP2010156783A (en) Karaoke performance system with tempo control function
JP5551983B2 (en) Karaoke performance control system
JP2023071043A (en) Karaoke system, guide voice control method and program
JPH06110479A (en) Karaoke recorded accompaniment device
JP6611633B2 (en) Karaoke system server
JP4116468B2 (en) Karaoke equipment
JP6196571B2 (en) Performance device and program
JP6057079B2 (en) Karaoke device and karaoke program
JP2000181469A (en) Karaoke device characterized by function of reproducing bgm in nonuse period
JP5500214B2 (en) Music score display output device and music score display output program
JP6594045B2 (en) Karaoke equipment
JP2003015657A (en) Music studio system of editing music software in accordance with singing voice of karaoke singer recorded in karaoke store and opening the same to the public over the internet

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application (Ref document number: 14764336; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 EP: PCT application non-entry in European phase (Ref document number: 14764336; Country of ref document: EP; Kind code of ref document: A1)