WO2022113907A1 - 音楽要素生成支援装置、音楽要素学習装置、音楽要素生成支援方法、音楽要素学習方法、音楽要素生成支援プログラムおよび音楽要素学習プログラム - Google Patents
- Publication number: WO2022113907A1 (PCT/JP2021/042636)
- Authority: WIPO (PCT)
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H1/00—Details of electrophonic musical instruments
- G10H1/0008—Associated control or indicating means
- G10H1/0025—Automatic or semi-automatic music composition, e.g. producing random music, applying rules from music theory or modifying a musical piece
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10G—REPRESENTATION OF MUSIC; RECORDING MUSIC IN NOTATION FORM; ACCESSORIES FOR MUSIC OR MUSICAL INSTRUMENTS NOT OTHERWISE PROVIDED FOR, e.g. SUPPORTS
- G10G1/00—Means for the representation of music
- G10G1/04—Transposing; Transcribing
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2210/00—Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
- G10H2210/101—Music Composition or musical creation; Tools or processes therefor
- G10H2210/105—Composing aid, e.g. for supporting creation, edition or modification of a piece of music
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2210/00—Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
- G10H2210/101—Music Composition or musical creation; Tools or processes therefor
- G10H2210/151—Music Composition or musical creation; Tools or processes therefor using templates, i.e. incomplete musical sections, as a basis for composing
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2250/00—Aspects of algorithms or signal processing methods without intrinsic musical character, yet specifically adapted for or used in electrophonic musical processing
- G10H2250/311—Neural networks for electrophonic musical instruments or musical processing, e.g. for musical recognition or control, automatic composition or improvisation
Definitions
- the present invention relates to a music element generation support device, a music element learning device, a music element generation support method, a music element learning method, a music element generation support program, and a music element learning program that support the generation of music elements.
- An automatic composition device is known as a device that automatically creates a melody.
- In such a device, a motif melody is set at a plurality of positions in one song to be created.
- A melody for the whole song is then generated by developing each of the set motif melodies according to a template prepared in advance.
- In another known technique, the type of a predetermined phrase of the music is determined based on a first trained model. Based on a second trained model, a part of one type is created from the determined phrase types. Further, using a third trained model, parts of other types are created sequentially from existing parts. A musical piece is created by arranging the plurality of created parts in the order specified by a predetermined template.
- An object of the present invention is to provide a music element generation support device, a music element learning device, a music element generation support method, a music element learning method, a music element generation support program, and a music element learning program that can easily generate a music element reflecting the intention of the user.
- the music element generation support device includes a reception unit that accepts a music element sequence that includes a plurality of music elements arranged in chronological order and contains a blank portion, and a generation unit that, using a learning model that generates music elements of other portions from some music elements, generates the music element of the blank portion based on a music element located behind the blank portion on the time axis in the music element sequence.
- a music element learning device includes an acquisition unit that acquires a plurality of music element sequences each including a plurality of music elements arranged in time series, a setting unit that randomly sets a blank portion in a part of each music element sequence, and a construction unit that constructs a learning model showing the relationship between some music elements and the music element of the blank portion by machine-learning the relationship between the music elements other than the blank portion and the music element of the blank portion in each music element sequence.
- the music element generation support method includes a step of accepting a music element sequence that includes a plurality of music elements arranged in time series and contains a blank portion, and a step of generating the music element of the blank portion, using a learning model that generates music elements of other portions from some music elements, based on a music element located behind the blank portion on the time axis in the music element sequence.
- the music element learning method includes a step of acquiring a plurality of music element sequences each including a plurality of music elements arranged in chronological order, a step of randomly setting a blank portion in a part of each music element sequence, and a step of constructing a learning model showing the relationship between some music elements and the music element of the blank portion by machine-learning the relationship between the music elements other than the blank portion and the music element of the blank portion.
- the music element generation support program is a program that causes a computer to execute the music element generation support method. It causes the computer to execute a process of accepting a music element sequence that includes a plurality of music elements arranged in chronological order and contains a blank portion, and a process of generating the music element of the blank portion, using a learning model that generates music elements of other portions from some music elements, based on a music element located behind the blank portion on the time axis in the music element sequence.
- the music element learning program is a program that causes a computer to execute the music element learning method. It causes the computer to execute a process of acquiring a plurality of music element sequences each including a plurality of music elements arranged in chronological order, a process of randomly setting a blank portion in a part of each music element sequence, and a process of constructing a learning model showing the relationship between some music elements and the music element of the blank portion by machine-learning the relationship between the music elements other than the blank portion and the music element of the blank portion in each music element sequence.
- FIG. 1 is a block diagram showing a configuration of a music element generation support system including a support device according to an embodiment of the present invention.
- FIG. 2 is a block diagram showing the configuration of the support device.
- FIG. 3 is a diagram for explaining the operation of the support device.
- FIG. 4 is a diagram for explaining the operation of the support device.
- FIG. 5 is a diagram for explaining the operation of the support device.
- FIG. 6 is a block diagram showing a configuration of a music element learning system including a learning device according to an embodiment of the present invention.
- FIG. 7 is a block diagram showing the configuration of the learning device.
- FIG. 8 is a diagram for explaining the operation of the learning device.
- FIG. 9 is a diagram for explaining the operation of the learning device.
- FIG. 10 is a flowchart showing an example of support processing by the support device of FIG. 2.
- FIG. 11 is a flowchart showing an example of learning processing by the learning device of FIG. 7.
- Hereinafter, the music element generation support device, the music element learning device, the music element generation support method, the music element learning method, the music element generation support program, and the music element learning program according to an embodiment of the present invention will be described in detail with reference to the drawings.
- the music element generation support device, the music element generation support method, and the music element generation support program are abbreviated as the support device, the support method, and the support program, respectively.
- the music element learning device, the music element learning method, and the music element learning program are abbreviated as the learning device, the learning method, and the learning program, respectively.
- FIG. 1 is a block diagram showing a configuration of a music element generation support system including a support device according to an embodiment of the present invention.
- the music element generation support system 100 (hereinafter abbreviated as the support system 100) includes a RAM (random access memory) 110, a ROM (read-only memory) 120, a CPU (central processing unit) 130, a storage unit 140, an operation unit 150, and a display unit 160.
- the support system 100 may be realized by an information processing device such as a personal computer, or may be realized by an electronic musical instrument having a performance function.
- the RAM 110, ROM 120, CPU 130, storage unit 140, operation unit 150, and display unit 160 are connected to the bus 170.
- the support device 10 is composed of the RAM 110, the ROM 120, and the CPU 130.
- the RAM 110 is made of, for example, a volatile memory and is used as a work area of the CPU 130 to temporarily store various data.
- the ROM 120 comprises, for example, a non-volatile memory and stores a support program.
- the CPU 130 performs music element generation support processing (hereinafter, abbreviated as support processing) by executing the support program stored in the ROM 120 on the RAM 110. The details of the support process will be described later.
- the storage unit 140 includes a storage medium such as a hard disk, an optical disk, a magnetic disk, or a memory card, and stores a learning model previously constructed by the learning device 20 of FIG. 7, which will be described later.
- the learning model may be stored not in the storage unit 140 but in a server on a network (including a cloud server; the same applies to the servers referred to below).
- the learning model shows the relationship between some music elements and the music elements in the blank part in the music element string including a plurality of music elements arranged in chronological order and including the blank part of the music element.
- the musical element sequence includes a melody, a chord progression, lyrics or a rhythm pattern. If the musical element sequence is a melody or rhythm pattern, the musical element is a note or rest. If the music element sequence is a chord progression, the music element is a chord. If the musical element sequence is lyrics, the musical element is a word.
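As an illustration only (the patent does not specify any data format), the four kinds of music element sequence named above and their blank portions might be represented as follows; `BLANK`, the `(pitch, length)` tuple encoding, and `blank_positions` are all assumed names, not taken from the patent:

```python
# BLANK is an assumed marker for the unspecified (blank) portion of a sequence.
BLANK = None

# Melody / rhythm pattern: elements are notes (pitch, length) or rests.
melody = [("C4", 1.0), ("E4", 0.5), BLANK, ("rest", 0.5), ("G4", 1.0)]
# Chord progression: elements are chords.
chords = ["C", "Am", BLANK, "G7"]
# Lyrics: elements are words.
lyrics = ["twinkle", "twinkle", BLANK, "star"]

def blank_positions(sequence):
    """Return the indices of every blank portion in a music element sequence."""
    return [i for i, element in enumerate(sequence) if element is BLANK]

print(blank_positions(melody))  # [2]
```

The same marker works for any of the sequence types, which is why the patent can describe the blank portion independently of whether the elements are notes, chords, or words.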
- the storage unit 140 may store the support program instead of the ROM 120.
- the support program may be provided in a form stored in a computer-readable recording medium and installed in the ROM 120 or the storage unit 140. Further, when the support system 100 is connected to the network, the support program distributed from the server on the network may be installed in the ROM 120 or the storage unit 140.
- the operation unit 150 includes a pointing device such as a mouse or a keyboard, and is operated by the user to make a predetermined selection or designation.
- the display unit 160 includes, for example, a liquid crystal display, and displays the result of the support process.
- the operation unit 150 and the display unit 160 may be configured by a touch panel display.
- FIG. 2 is a block diagram showing a configuration of the support device 10.
- FIGS. 3 to 5 are diagrams for explaining the operation of the support device 10.
- the music element sequence is a melody. Therefore, the musical element includes the pitch of the note and the length of the note or rest.
- the support device 10 includes a reception unit 11, a generation unit 12, a presentation unit 13, a selection unit 14, and a creation unit 15.
- the functions of the reception unit 11, the generation unit 12, the presentation unit 13, the selection unit 14, and the creation unit 15 are realized by the CPU 130 in FIG. 1 executing the support program.
- At least a part of the reception unit 11, the generation unit 12, the presentation unit 13, the selection unit 14, and the creation unit 15 may be realized by hardware such as an electronic circuit.
- the reception unit 11 receives a music element sequence that includes a plurality of music elements arranged in chronological order and includes a blank portion of the music elements.
- In the music element string, there may be one blank portion or a plurality of blank portions. Further, each blank portion may correspond to one music element or a plurality of music elements.
- the user can input the music element string data indicating the music element string being produced to the reception unit 11.
- the music element string data may be produced using, for example, music production software.
- the musical element sequence is defined by combinations of the pitch of a note or a rest and the time at which the note or rest is located.
- the music element sequence being produced contains a part of blank space where neither notes nor rests are specified.
- Using the learning model, the generation unit 12 generates a plurality of music elements that fit the blank portion based on the music element located behind the blank portion on the time axis in the music element sequence received by the reception unit 11. Further, the generation unit 12 evaluates the goodness of fit of each of the plurality of music elements generated for the blank portion.
- the presentation unit 13 presents a predetermined number of the music elements generated by the generation unit 12 for the blank portion, in order of goodness of fit.
- For example, five generated music elements are displayed on the display unit 160 in order of goodness of fit.
- the above-mentioned predetermined number is not limited to 5, and may be arbitrarily set by the user.
- the presentation unit 13 may present a music element having a goodness of fit higher than a predetermined goodness of fit among the musical elements generated by the generation unit 12.
- the predetermined goodness of fit may be arbitrarily set by the user.
- the selection unit 14 selects a designated music element from the plurality of music elements generated by the generation unit 12.
- The user can designate a desired music element among the music elements generated by the generation unit 12 by operating the operation unit 150 while referring to the music elements and goodness-of-fit values presented by the presentation unit 13.
- the selection unit 14 may select the music element having the highest goodness of fit among the music elements generated by the generation unit 12. In this case, the support device 10 does not have to include the presentation unit 13.
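A minimal sketch of the ranking behavior shared by the generation, presentation, and selection units described above; the candidate list and the `fit` scoring function are hypothetical stand-ins, not the patent's trained learning model:

```python
def rank_candidates(candidates, goodness_of_fit, top_n=5):
    """Score each generated element and keep the top_n, highest fit first."""
    scored = [(goodness_of_fit(c), c) for c in candidates]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[:top_n]

# Hypothetical goodness of fit: prefer MIDI pitches close to an assumed target.
TARGET_PITCH = 64  # MIDI pitch for E4

def fit(candidate):
    return -abs(candidate - TARGET_PITCH)

candidates = [60, 62, 64, 65, 67, 71]
print(rank_candidates(candidates, fit, top_n=3))  # [(0, 64), (-1, 65), (-2, 62)]
```

The presentation unit corresponds to showing the returned list; selecting the single highest-fit element (the variant where no presentation unit is needed) is just `rank_candidates(candidates, fit, top_n=1)`.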
- the creation unit 15 creates a music element string that does not include the blank portion by applying the music element selected by the selection unit 14 to the blank portion.
- FIG. 6 is a block diagram showing a configuration of a music element learning system including a learning device according to an embodiment of the present invention.
- the music element learning system 200 (hereinafter, abbreviated as learning system 200) includes a RAM 210, a ROM 220, a CPU 230, a storage unit 240, an operation unit 250, and a display unit 260.
- the learning system 200 may be realized by an information processing device or an electronic musical instrument, similarly to the support system 100 of FIG. 1. Alternatively, the learning system 200 and the support system 100 may be realized by the same hardware resources.
- the RAM 210, ROM 220, CPU 230, storage unit 240, operation unit 250 and display unit 260 are connected to the bus 270.
- the learning device 20 is composed of the RAM 210, the ROM 220, and the CPU 230.
- the RAM 210 is composed of, for example, a volatile memory, is used as a work area of the CPU 230, and temporarily stores various data.
- the ROM 220 comprises, for example, a non-volatile memory and stores a learning program.
- the CPU 230 performs music element learning processing (hereinafter, abbreviated as learning processing) by executing the learning program stored in the ROM 220 on the RAM 210. The details of the learning process will be described later.
- the storage unit 240 includes a storage medium such as a hard disk, an optical disk, a magnetic disk, or a memory card, and stores a plurality of music element string data.
- the music element string data may be, for example, MIDI (Musical Instrument Digital Interface) data.
- the music element string data may be stored in the server on the network instead of the storage unit 240.
- the storage unit 240 may store the learning program instead of the ROM 220.
- the learning program may be provided in a form stored in a computer-readable recording medium and installed in the ROM 220 or the storage unit 240. Further, when the learning system 200 is connected to a network, the learning program distributed from a server on the network may be installed in the ROM 220 or the storage unit 240.
- the operation unit 250 includes a pointing device such as a mouse or a keyboard, and is operated by the user to make a predetermined selection or designation.
- the display unit 260 includes, for example, a liquid crystal display, and displays a predetermined GUI (Graphical User Interface) in the learning process.
- the operation unit 250 and the display unit 260 may be configured by a touch panel display.
- FIG. 7 is a block diagram showing the configuration of the learning device 20. FIGS. 8 and 9 are diagrams for explaining the operation of the learning device 20. As in FIGS. 3 to 5, the musical element sequence in FIGS. 8 and 9 is a melody.
- the learning device 20 includes an acquisition unit 21, a setting unit 22, and a construction unit 23.
- the functions of the acquisition unit 21, the setting unit 22, and the construction unit 23 are realized by the CPU 230 in FIG. 6 executing the learning program.
- At least a part of the acquisition unit 21, the setting unit 22, and the construction unit 23 may be realized by hardware such as an electronic circuit.
- the acquisition unit 21 acquires the music element string indicated by each music element string data stored in the storage unit 240 or the like.
- the music element string represented by the music element string data stored in the storage unit 240 or the like includes a plurality of music elements arranged in time series and does not include a blank portion.
- the setting unit 22 randomly sets a blank part as a mask in a part of each music element string acquired by the acquisition unit 21 according to a predetermined setting condition.
- the user can specify the mask setting condition by operating the GUI displayed on the display unit 260 using the operation unit 250.
- the mask setting conditions include the number of masks to be set, or the ratio of the length to which the mask should be set to the length of the music element string.
- the length of each mask on the time axis may be in note units or bar units.
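The setting unit's random masking under a user-specified condition (a fixed number of masks or a ratio of the sequence length) can be sketched as follows; the `<mask>` token and the function signature are assumptions for illustration:

```python
import random

def set_masks(sequence, num_masks=None, mask_ratio=None, rng=None):
    """Randomly replace elements with a mask token, per the setting condition:
    either a fixed number of masks or a ratio of the sequence length."""
    rng = rng or random.Random()
    if num_masks is None:
        num_masks = max(1, round(len(sequence) * mask_ratio))
    masked = list(sequence)
    for i in rng.sample(range(len(sequence)), num_masks):  # distinct random positions
        masked[i] = "<mask>"
    return masked

melody = ["C4", "D4", "E4", "F4", "G4", "A4", "B4", "C5"]
masked = set_masks(melody, mask_ratio=0.25, rng=random.Random(0))
print(masked.count("<mask>"))  # 2
```

Masking in bar units rather than note units, as the patent also allows, would amount to sampling contiguous index ranges instead of single positions.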
- the construction unit 23 machine-learns the relationship between the music elements other than the mask portion and the music element of the mask portion in each music element sequence acquired by the acquisition unit 21, thereby constructing a learning model that shows the relationship between some music elements and the music element of the mask portion.
- In the present embodiment, the construction unit 23 performs machine learning using a Transformer, but the embodiment is not limited to this.
- the construction unit 23 may perform machine learning using another method such as RNN (Recurrent Neural Network).
- the learning model is constructed so as to generate a music element that matches the mask part based on the music element located behind the mask part on the time axis in each music element sequence.
- the learning model constructed by the construction unit 23 is stored in the storage unit 140 of FIG. 1.
- the learning model constructed by the construction unit 23 may be stored in a server or the like on the network.
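To illustrate the core idea of a model that generates the masked element from the element located behind it on the time axis, here is a deliberately tiny count-based stand-in; the patent uses a Transformer (or RNN), so the counting scheme below is only an analogy:

```python
import random
from collections import Counter, defaultdict

def build_model(sequences, rng):
    """Record (element behind the mask -> masked element) co-occurrence counts."""
    table = defaultdict(Counter)
    for seq in sequences:
        i = rng.randrange(len(seq) - 1)   # mask a random element that has a successor
        masked_element, following = seq[i], seq[i + 1]
        table[following][masked_element] += 1
    return table

def generate(model, following):
    """Generate the blank element from the element located behind the blank."""
    return model[following].most_common(1)[0][0]

# Two-element training sequences make the mask position deterministic here.
training = [["C4", "G4"]] * 10 + [["G4", "C4"]] * 10
model = build_model(training, random.Random(0))
print(generate(model, "G4"))  # C4
```

A Transformer replaces the count table with learned attention over the whole unmasked context, but the input/output contract — masked sequence in, masked element out — is the same.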
- FIG. 10 is a flowchart showing an example of support processing by the support device 10 of FIG. 2.
- the support process of FIG. 10 is performed by the CPU 130 of FIG. 1 executing a support program stored in the storage unit 140 or the like.
- First, the reception unit 11 receives a music element string that contains a blank portion (step S1).
- Next, the generation unit 12 generates a plurality of music elements that fit the blank portion of the music element string received in step S1, using the learning model constructed in step S15 of the learning process described later (step S2). Further, the generation unit 12 evaluates the goodness of fit of each music element generated in step S2 (step S3). Subsequently, the presentation unit 13 presents a predetermined number of the music elements generated in step S2 in the order of the goodness of fit evaluated in step S3 (step S4).
- the selection unit 14 determines whether or not any of the music elements generated in step S2 is designated (step S5). If no music element is specified, the selection unit 14 waits until any music element is specified. When any of the music elements is specified, the selection unit 14 selects the designated music element (step S6).
- Finally, the creation unit 15 creates a music element string that does not include a blank portion by applying the music element selected in step S6 to the blank portion of the music element string received in step S1 (step S7). This ends the support process.
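Steps S1 through S7 above can be sketched end to end; `ToyModel`, its hard-coded candidates, and the fit scores are hypothetical stand-ins for the trained learning model:

```python
def support_process(sequence, model, blank="<mask>"):
    """Sketch of steps S1-S7: accept, generate, score, select, fill."""
    i = sequence.index(blank)                        # S1: locate the blank portion
    following = sequence[i + 1]                      # element behind the blank on the time axis
    candidates = model.generate(following)           # S2: generate candidates
    scored = sorted(candidates, key=model.fit, reverse=True)  # S3-S4: goodness of fit
    best = scored[0]                                 # S5-S6: take the designated (here: top) element
    return sequence[:i] + [best] + sequence[i + 1:]  # S7: blank-free sequence

class ToyModel:
    """Hypothetical stand-in: fixed candidates and fit scores."""
    def generate(self, following):
        return ["E4", "F4", "D4"]
    def fit(self, candidate):
        return {"E4": 0.9, "F4": 0.6, "D4": 0.4}[candidate]

print(support_process(["C4", "<mask>", "G4"], ToyModel()))  # ['C4', 'E4', 'G4']
```

In the device itself, the choice at S5-S6 comes from the user via the operation unit rather than automatically taking the top-scored element.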
- FIG. 11 is a flowchart showing an example of a learning process by the learning device 20 of FIG. 7.
- the learning process of FIG. 11 is performed by the CPU 230 of FIG. 7 executing a learning program stored in the storage unit 240 or the like.
- the acquisition unit 21 acquires a music element string that does not include a blank portion of the music element (step S11).
- the setting unit 22 randomly sets a mask in a part of the music element sequence acquired in step S11 (step S12).
- the construction unit 23 machine-learns the relationship between the music element other than the mask portion in the music element string acquired in step S11 and the music element of the mask portion set in step S12 (step S13). After that, the construction unit 23 determines whether or not the machine learning has been executed a predetermined number of times (step S14).
- Steps S11 to S14 are repeated until machine learning has been executed the predetermined number of times.
- the number of machine learning iterations is preset according to the accuracy of the learning model to be constructed.
- When the predetermined number of iterations has been executed, the construction unit 23 constructs a learning model showing the relationship between some music elements in the music element sequence and the music element of the mask portion based on the result of the machine learning (step S15). This ends the learning process.
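The learning-process loop of steps S11 to S15 might be organized as below; `train_step` is an assumed callback standing in for one machine-learning update, and the dictionary model state is purely illustrative:

```python
import random

def learning_process(sequences, num_iterations, train_step, rng=None):
    """Sketch of steps S11-S15: repeat masking and learning a preset number of times."""
    rng = rng or random.Random()
    model_state = {}
    for _ in range(num_iterations):                  # S14: fixed iteration count
        for seq in sequences:                        # S11: acquire music element sequences
            i = rng.randrange(len(seq))              # S12: randomly choose the mask position
            masked = seq[:i] + ["<mask>"] + seq[i + 1:]
            train_step(model_state, masked, seq[i])  # S13: learn the masked element
    return model_state                               # S15: the constructed model

def count_step(state, masked, target):
    # Stand-in update: just count how often each element was masked.
    state[target] = state.get(target, 0) + 1

state = learning_process([["C4", "E4", "G4"]], num_iterations=4,
                         train_step=count_step, rng=random.Random(0))
print(sum(state.values()))  # 4
```

The iteration count plays the role of the preset number checked in step S14; in practice it would be chosen against the required model accuracy.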
- As described above, the support device 10 according to the present embodiment includes the reception unit 11 that accepts a music element sequence that includes a plurality of music elements arranged in time series and contains a blank portion, and the generation unit 12 that, using a learning model that generates music elements of other portions from some music elements, generates the music element of the blank portion based on a music element located behind the blank portion on the time axis in the music element sequence.
- With this configuration, a music element that fits the blank portion is generated based on the music element located behind that portion on the time axis. This makes it possible to easily generate a music element that reflects the user's intention.
- the generation unit 12 may generate a plurality of music elements that match the blank portion and evaluate the goodness of fit of each generated music element. In this case, it becomes easy to generate a music element sequence using a music element that fits the blank part more naturally.
- the support device 10 may further include a presentation unit 13 that presents a predetermined number of generated musical elements in the order of goodness of fit. In this case, the user can easily recognize the musical element having a relatively high goodness of fit.
- the support device 10 may further include a presentation unit 13 that presents a music element having a goodness of fit higher than a predetermined goodness of fit among the generated music elements. In this case, the user can easily recognize the musical element having a goodness of fit higher than the predetermined goodness of fit.
- the support device 10 may further include a selection unit 14 that selects the music element having the highest goodness of fit among the generated music elements. In this case, a music element that reflects the user's intention can be generated automatically.
- the musical element sequence may include a melody, a chord progression, lyrics or a rhythm pattern. In this case, it becomes possible to easily generate a melody, chord progression, lyrics or rhythm pattern that reflects the user's intention.
- The learning device 20 according to the present embodiment includes the acquisition unit 21 that acquires a plurality of music element sequences each including a plurality of music elements arranged in time series, the setting unit 22 that randomly sets a blank portion in a part of each music element sequence, and the construction unit 23 that constructs a learning model showing the relationship between some music elements and the music element of the blank portion by machine-learning the relationship between the music elements other than the blank portion and the music element of the blank portion in each music element sequence. In this case, it is possible to construct a learning model capable of generating a music element that reflects the user's intention.
- In the above embodiment, the learning model is constructed by the construction unit 23 of the learning device 20 so as to generate music elements that fit the mask portion based on the music element located behind the mask portion on the time axis in each music element sequence. Therefore, the generation unit 12 of the support device 10 uses the learning model to generate a music element that fits the blank portion based on the music element located behind the blank portion on the time axis in the music element sequence.
- Alternatively, the learning model may be constructed by the construction unit 23 so as to generate music elements that fit the mask portion based on the music elements located both behind and ahead of the mask portion on the time axis in each music element sequence. In this case, the generation unit 12 may use the learning model to generate music elements that fit the blank portion based on the music elements located both behind and ahead of the blank portion on the time axis in the music element sequence. With this configuration, it becomes possible to generate a music element that fits the blank portion more naturally.
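The bidirectional variant, which conditions on elements both behind and ahead of the blank, could combine per-side scores as below; the additive combination rule and `TwoSidedModel` are assumptions for illustration, not taken from the patent:

```python
def generate_bidirectional(model, preceding, following):
    """Fill the blank from both temporal directions by summing per-side scores."""
    scores = dict(model.score_from_following(following))
    for cand, s in model.score_from_preceding(preceding).items():
        scores[cand] = scores.get(cand, 0.0) + s
    return max(scores, key=scores.get)

class TwoSidedModel:
    """Hypothetical per-side candidate scores for a blank between two elements."""
    def score_from_following(self, element):
        return {"E4": 0.5, "F4": 0.4}
    def score_from_preceding(self, element):
        return {"E4": 0.3, "D4": 0.6}

print(generate_bidirectional(TwoSidedModel(), "C4", "G4"))  # E4
```

A candidate supported by both sides ("E4" here) can beat one that fits only a single direction, which is the intuition behind the claim that bidirectional context yields more natural results.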
- the generation unit 12 generates a plurality of music elements that match the blank portion and evaluates the goodness of fit of each generated music element, but the embodiment is not limited to this.
- the generation unit 12 may generate only one music element that matches the blank portion. In this case, the generation unit 12 does not have to evaluate the goodness of fit of the generated music element.
Description
FIG. 1 is a block diagram showing the configuration of a music element generation support system including a support device according to an embodiment of the present invention. As shown in FIG. 1, the music element generation support system 100 (hereinafter abbreviated as the support system 100) includes a RAM (random access memory) 110, a ROM (read-only memory) 120, a CPU (central processing unit) 130, a storage unit 140, an operation unit 150, and a display unit 160.
FIG. 2 is a block diagram showing the configuration of the support device 10. FIGS. 3 to 5 are diagrams for explaining the operation of the support device 10. In FIGS. 3 to 5, the music element sequence is a melody. Therefore, the music element includes the pitch of a note and the length of a note or rest.
FIG. 6 is a block diagram showing the configuration of a music element learning system including a learning device according to an embodiment of the present invention. As shown in FIG. 6, the music element learning system 200 (hereinafter abbreviated as the learning system 200) includes a RAM 210, a ROM 220, a CPU 230, a storage unit 240, an operation unit 250, and a display unit 260.
FIG. 7 is a block diagram showing the configuration of the learning device 20. FIGS. 8 and 9 are diagrams for explaining the operation of the learning device 20. As in FIGS. 3 to 5, the music element sequence in FIGS. 8 and 9 is a melody. As shown in FIG. 7, the learning device 20 includes an acquisition unit 21, a setting unit 22, and a construction unit 23. The functions of the acquisition unit 21, the setting unit 22, and the construction unit 23 are realized by the CPU 230 of FIG. 6 executing the learning program. At least a part of the acquisition unit 21, the setting unit 22, and the construction unit 23 may be realized by hardware such as an electronic circuit.
FIG. 10 is a flowchart showing an example of support processing by the support device 10 of FIG. 2. The support processing of FIG. 10 is performed by the CPU 130 of FIG. 1 executing the support program stored in the storage unit 140 or the like. First, the reception unit 11 accepts a music element sequence that contains a blank portion (step S1).
FIG. 11 is a flowchart showing an example of learning processing by the learning device 20 of FIG. 7. The learning processing of FIG. 11 is performed by the CPU 230 of FIG. 7 executing the learning program stored in the storage unit 240 or the like. First, the acquisition unit 21 acquires a music element sequence that does not contain a blank portion (step S11). Next, the setting unit 22 randomly sets a mask in a part of the music element sequence acquired in step S11 (step S12).
As described above, the support device 10 according to the present embodiment includes the reception unit 11 that accepts a music element sequence including a plurality of music elements arranged in time series and containing a blank portion, and the generation unit 12 that, using a learning model that generates music elements of other portions from some music elements, generates the music element of the blank portion based on a music element located behind the blank portion on the time axis in the music element sequence.
In the above embodiment, the learning model is constructed by the construction unit 23 of the learning device 20 so as to generate a music element that fits the mask portion based on the music element located behind the mask portion on the time axis in each music element sequence. Therefore, the generation unit 12 of the support device 10 uses the learning model to generate a music element that fits the blank portion based on the music element located behind the blank portion on the time axis in the music element sequence.
Claims (12)
- 1. A music element generation support device comprising: a reception unit that accepts a music element sequence that includes a plurality of music elements arranged in time series and contains a blank portion; and a generation unit that, using a learning model that generates music elements of other portions from some music elements, generates the music element of the blank portion based on a music element located behind the blank portion on the time axis in the music element sequence.
- 2. The music element generation support device according to claim 1, wherein the generation unit generates the music element of the blank portion, using the learning model, further based on a music element located ahead of the blank portion on the time axis in the music element sequence.
- 3. The music element generation support device according to claim 1 or 2, wherein the generation unit generates a plurality of music elements that fit the blank portion and evaluates a goodness of fit of each generated music element.
- 4. The music element generation support device according to claim 3, further comprising a presentation unit that presents a predetermined number of the generated music elements in order of goodness of fit.
- 5. The music element generation support device according to claim 3, further comprising a presentation unit that presents, among the generated music elements, music elements having a goodness of fit higher than a predetermined goodness of fit.
- 6. The music element generation support device according to claim 3, further comprising a selection unit that selects, among the generated music elements, the music element having the highest goodness of fit.
- 7. The music element generation support device according to any one of claims 1 to 6, wherein the music element sequence includes a melody, a chord progression, lyrics, or a rhythm pattern.
- 8. A music element learning device comprising: an acquisition unit that acquires a plurality of music element sequences each including a plurality of music elements arranged in time series; a setting unit that randomly sets a blank portion in a part of each music element sequence; and a construction unit that constructs a learning model showing the relationship between some music elements and the music element of the blank portion by machine-learning the relationship between the music elements other than the blank portion and the music element of the blank portion in each music element sequence.
- 9. A music element generation support method comprising: a step of accepting a music element sequence that includes a plurality of music elements arranged in time series and contains a blank portion; and a step of generating the music element of the blank portion, using a learning model that generates music elements of other portions from some music elements, based on a music element located behind the blank portion on the time axis in the music element sequence.
- 10. A music element learning method comprising: a step of acquiring a plurality of music element sequences each including a plurality of music elements arranged in time series; a step of randomly setting a blank portion in a part of each music element sequence; and a step of constructing a learning model showing the relationship between some music elements and the music element of the blank portion by machine-learning the relationship between the music elements other than the blank portion and the music element of the blank portion in each music element sequence.
- 11. A music element generation support program causing a computer to execute a music element generation support method, the program causing the computer to execute: a process of accepting a music element sequence that includes a plurality of music elements arranged in time series and contains a blank portion; and a process of generating the music element of the blank portion, using a learning model that generates music elements of other portions from some music elements, based on a music element located behind the blank portion on the time axis in the music element sequence.
- 12. A music element learning program causing a computer to execute a music element learning method, the program causing the computer to execute: a process of acquiring a plurality of music element sequences each including a plurality of music elements arranged in time series; a process of randomly setting a blank portion in a part of each music element sequence; and a process of constructing a learning model showing the relationship between some music elements and the music element of the blank portion by machine-learning the relationship between the music elements other than the blank portion and the music element of the blank portion in each music element sequence.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2022565303A JPWO2022113907A1 (ja) | 2020-11-25 | 2021-11-19 | |
CN202180077995.XA CN116529809A (zh) | 2020-11-25 | 2021-11-19 | Music element generation support device, music element learning device, music element generation support method, music element learning method, music element generation support program, and music element learning program
US18/322,967 US20230298548A1 (en) | 2020-11-25 | 2023-05-24 | Musical element generation support device, musical element learning device, musical element generation support method, musical element learning method, non-transitory computer-readable medium storing musical element generation support program, and non-transitory computer-readable medium storing musical element learning program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2020194991 | 2020-11-25 | ||
JP2020-194991 | 2020-11-25 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/322,967 Continuation US20230298548A1 (en) | 2020-11-25 | 2023-05-24 | Musical element generation support device, musical element learning device, musical element generation support method, musical element learning method, non-transitory computer-readable medium storing musical element generation support program, and non-transitory computer-readable medium storing musical element learning program |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022113907A1 true WO2022113907A1 (ja) | 2022-06-02 |
Family
ID=81754603
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2021/042636 WO2022113907A1 (ja) | 2020-11-25 | 2021-11-19 | Music element generation support device, music element learning device, music element generation support method, music element learning method, music element generation support program, and music element learning program |
Country Status (4)
Country | Link |
---|---|
US (1) | US20230298548A1 (ja) |
JP (1) | JPWO2022113907A1 (ja) |
CN (1) | CN116529809A (ja) |
WO (1) | WO2022113907A1 (ja) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2020003535A (ja) * | 2018-06-25 | 2020-01-09 | Casio Computer Co., Ltd. | Program, information processing method, electronic device, and trained model |
JP2020042367A (ja) * | 2018-09-06 | 2020-03-19 | Awl Co., Ltd. | Learning system, server, and feature image drawing interpolation program |
JP2020154951A (ja) * | 2019-03-22 | 2020-09-24 | Dai Nippon Printing Co., Ltd. | Font selection device and program |
Also Published As
Publication number | Publication date |
---|---|
JPWO2022113907A1 (ja) | 2022-06-02 |
CN116529809A (zh) | 2023-08-01 |
US20230298548A1 (en) | 2023-09-21 |
Legal Events
Code | Title | Description |
---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21897880; Country of ref document: EP; Kind code of ref document: A1 |
WWE | Wipo information: entry into national phase | Ref document number: 202180077995.X; Country of ref document: CN |
ENP | Entry into the national phase | Ref document number: 2022565303; Country of ref document: JP; Kind code of ref document: A |
NENP | Non-entry into the national phase | Ref country code: DE |
122 | Ep: pct application non-entry in european phase | Ref document number: 21897880; Country of ref document: EP; Kind code of ref document: A1 |