EP2043089A1 - Method and device for humanizing music sequences - Google Patents

Method and device for humanizing music sequences

Info

Publication number
EP2043089A1
Authority
EP
European Patent Office
Prior art keywords
music
time
music sequence
sequence
humanizing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
EP07117541A
Other languages
German (de)
French (fr)
Other versions
EP2043089B1 (en)
Inventor
Holger Hennig
Ragnar Fleischmann
Fabian Theis
Theo Geisel
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Max Planck Gesellschaft zur Foerderung der Wissenschaften eV
Original Assignee
Max Planck Gesellschaft zur Foerderung der Wissenschaften eV
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Max Planck Gesellschaft zur Foerderung der Wissenschaften eV
Priority to EP07117541A
Publication of EP2043089A1
Application granted
Publication of EP2043089B1
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00 - Details of electrophonic musical instruments
    • G10H1/0033 - Recording/reproducing or transmission of music for electrophonic musical instruments
    • G - PHYSICS
    • G04 - HOROLOGY
    • G04F - TIME-INTERVAL MEASURING
    • G04F5/00 - Apparatus for producing preselected time intervals for use as timing standards
    • G04F5/02 - Metronomes
    • G04F5/025 - Electronic metronomes
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00 - Details of electrophonic musical instruments
    • G10H1/36 - Accompaniment arrangements
    • G10H1/40 - Rhythm
    • G10H1/42 - Rhythm comprising tone forming circuits
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2210/00 - Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H2210/155 - Musical effects
    • G10H2210/161 - Note sequence effects, i.e. sensing, altering, controlling, processing or synthesising a note trigger selection or sequence, e.g. by altering trigger timing, triggered note values, adding improvisation or ornaments, also rapid repetition of the same note onset, e.g. on a piano, guitar, e.g. rasgueado, drum roll
    • G10H2210/165 - Humanizing effects, i.e. causing a performance to sound less machine-like, e.g. by slightly randomising pitch or tempo
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2210/00 - Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H2210/341 - Rhythm pattern selection, synthesis or composition
    • G10H2210/356 - Random process used to build a rhythm pattern
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2250/00 - Aspects of algorithms or signal processing methods without intrinsic musical character, yet specifically adapted for or used in electrophonic musical processing
    • G10H2250/131 - Mathematical functions for musical analysis, processing, synthesis or composition
    • G10H2250/211 - Random number generators, pseudorandom generators, classes of functions therefor
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2250/00 - Aspects of algorithms or signal processing methods without intrinsic musical character, yet specifically adapted for or used in electrophonic musical processing
    • G10H2250/295 - Noise generation, its use, control or rejection for music processing
    • G10H2250/301 - Pink 1/f noise or flicker noise

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Acoustics & Sound (AREA)
  • General Physics & Mathematics (AREA)
  • Electrophonic Musical Instruments (AREA)
  • Auxiliary Devices For Music (AREA)

Abstract

A method for humanizing a music sequence (S), the music sequence (S) comprising a multitude of sounds (s1, ..., sn) occurring on times (t1, ...,tn), comprises the steps
- generating, for each time (ti) a random offset (oi),
- adding the random offset (oi) to the time (ti) in order to obtain a modified time (ti + oi); and
- outputting a humanized music sequence (S') wherein each sound (si) occurs on the modified time (ti + oi).
According to the invention, the power spectral density of the random offsets has the form 1/fα, wherein 0 < α < 2.

Description

  • The present invention relates to a method and a device for humanizing music sequences. In particular, it relates to humanizing drum sequences.
  • TECHNICAL BACKGROUND AND PRIOR ART
  • Large parts of existing music are characterized by a sequence of stressed and unstressed beats (often called "strong" and "weak"). Beats divide the time axis of a piece of music or a musical sequence by impulses or pulses. The beat is intimately tied to the meter (metre) of the music as it designates that level of the meter (metre) that is particularly important, e.g. for the perceived tempo of the music.
  • A well-known instrument for determining the beat of a musical sequence is a metronome. A metronome is any device that produces a regulated audible and/or visual pulse, usually used to establish a steady beat, or tempo, measured in beats-per-minute (BPM) for the performance of musical compositions. Ideally, the pulses are equidistant.
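  • For example, a metronome set to 120 BPM ideally produces one pulse every 60 s / 120 = 0.5 s, i.e. equidistant clicks at 0.0 s, 0.5 s, 1.0 s, and so on.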
  • However, humans performing music will never exactly match the beat given by a metronome. Instead, music performed by humans will always exhibit a certain amount of fluctuations compared with the steady beat of a metronome. Machine-generated music on the other hand, such as an artificial drum sequence, has no difficulty in always keeping the exact beat, as synthesizers and computers are equipped with ultra precise clocking mechanisms.
  • But machine-generated music, an artificial drum sequence in particular, is often recognizable just for this perfection and frequently devalued by audiences due to a perceived lack of human touch. The same holds true for music performed by humans which is recorded and then undergoes some kind of analogue or digital editing. Post-processing is a standard procedure in contemporary music production, e.g. for the purpose of enhancing human performed music having shortcomings due to a lack of performing skills or inadequate instruments, etc. Here also, even music originally performed by humans may acquire an undesired artificial touch.
  • Therefore, there exists a desire to generate or modify music on a machine that sounds more natural.
  • SUMMARY OF THE INVENTION
  • It is therefore an object of the present invention to provide a method and a device for generating or modifying music sequences having a more human touch.
  • This object is achieved according to the invention by a method and a device according to the independent claims. Advantageous embodiments are defined in the dependent claims.
  • The term sound to which the claims refer is defined herein as a subsequence of a music sequence. In some embodiments, a sound may correspond to a note or a beat played by an instrument. Each sound has a temporal occurrence t within the music sequence.
  • Preliminary results of empirical experiments carried out by the inventors strongly indicate that a rhythm comprising a natural random fluctuation as generated according to the invention sounds much better, or more natural, to listeners than the same rhythm comprising fluctuations due to white noise with the same standard deviation; this holds for uniformly distributed white noise and also when Gaussian white noise is used instead.
  • BRIEF DESCRIPTION OF THE FIGURES
  • These and further aspects and advantages of the present invention will become more apparent from the following detailed description of the invention, taken in connection with the attached drawings, in which
  • Fig. 1
    shows a plot of a natural drum signal or beat compared with a metronome signal;
    Fig. 2
    shows the spectrum of pink noise graphed double logarithmically;
    Fig. 3
    shows a flowchart of a method according to an embodiment of the invention;
    Fig. 4
    shows a block diagram of a device for humanizing music sequences according to an embodiment of the invention; and
    Fig. 5
    shows another block diagram of a device for humanizing music sequences according to another embodiment of the invention.
    DETAILED DESCRIPTION OF THE INVENTION
  • Figure 1 shows a plot of a natural drum signal or beat compared with a metronome signal. Compared to a real audio signal, the plot is stylized for the purpose of describing the present invention, which only pertains to the temporal occurrence patterns of sounds. The skilled person will immediately recognize that in reality, each beat or note played is composed of an onset, an attack and a decay phase from which the present description abstracts.
  • The beats of the metronome occur on times t1, t2 and t3 and constitute a regular sequence of the form tn = t0 + n·T, wherein tn is the temporal occurrence or time of the n-th beat, t0 is the time of the initial beat and T denotes the time between metronome clicks.
  • The human drummer's beats occur on times t'1, t'2 and t'3 and constitute an irregular sequence. The offsets on between the human beats and the metronome beats may be calculated as on = t'n - tn.
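Purely as an illustration of the two relations above, here is a minimal Python sketch; the human beat times used are made-up values, not measured data:

    # Illustrative sketch only; times are in seconds and the human beats are invented.
    t0 = 0.0                 # time of the initial metronome click
    T = 0.5                  # inter-click interval (120 BPM)

    # Regular metronome grid: t_n = t0 + n*T
    metronome = [t0 + n * T for n in range(4)]              # [0.0, 0.5, 1.0, 1.5]

    # Hypothetical human beat times t'_n
    human = [0.01, 0.48, 1.03, 1.49]

    # Offsets o_n = t'_n - t_n between human beats and metronome clicks
    offsets = [tp - t for tp, t in zip(human, metronome)]   # approx. [0.01, -0.02, 0.03, -0.01]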
  • Alternatively, the above definitions may also be generalized in order to track deviations of a sequence from a given metric pattern instead of from a metronome. In other words, instead of taking regular distances T for the metronome clicks, a more complex metronome signal can be generated wherein the distances between clicks are not equal but are distributed according to a more complex pattern. In particular, the pattern may correspond to a particular rhythm.
  • Now, according to empirical investigations of the inventors, the offsets of human drum sequences may be described by Gaussian distributed 1/fα noise, where f is a frequency and α is a shape parameter of the spectrum.
  • Figure 2 shows an example of a random signal whose power spectral density is equal to 1/fα, wherein α = 1, graphed double logarithmically. Within the scientific literature, this kind of noise is also referred to as 'pink noise'. The parameter α is then equivalent to the absolute value of the slope of the graph.
  • With regard to the invention, in particular with respect to human drumming, the parameter α may be estimated empirically by comparing the beat sequence generated by a human drum player (or several of them) with a metronome. More particularly, the temporal differences between the human and the artificial beats correspond to the offsets oi of figure 1, and the estimation of α may be carried out by performing a linear regression on the offsets' power spectral density, wherein both axes have been transformed logarithmically for linearization, i.e. on a double-logarithmic plot as in figure 2.
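A possible software realization of this estimation step is sketched below. It is an assumption of this description rather than a procedure prescribed by the patent: the offsets' periodogram is fitted by a straight line in double-logarithmic coordinates, and the negative slope serves as the estimate of α.

    import numpy as np

    def estimate_alpha(offsets):
        """Estimate the exponent alpha of an offset series with PSD ~ 1/f^alpha.

        Sketch only: plain periodogram plus a least-squares line fit in log-log
        coordinates; more robust spectral estimators may be used in practice.
        """
        o = np.asarray(offsets, dtype=float)
        o = o - o.mean()                              # remove the mean (DC) component
        power = np.abs(np.fft.rfft(o)) ** 2           # periodogram, up to a constant factor
        freqs = np.fft.rfftfreq(o.size)               # frequencies in cycles per beat
        keep = freqs > 0                              # f = 0 cannot be taken to the log
        slope, _ = np.polyfit(np.log(freqs[keep]), np.log(power[keep]), 1)
        return -slope                                 # PSD ~ 1/f^alpha  =>  log-log slope = -alpha

Applied to the offset series of a sufficiently long recording, the returned value may serve as the empirical estimate of α referred to below.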
  • Experiments carried out by the inventors, using the inventors' own recordings as well as recordings of drummers provided by professional recording studios, revealed that the exponent α appears to be largely independent of the drummer. The parameter α also clearly appears to be greater than zero (0). Also, it appears to be smaller than 2.0 in general. For drumming, it has been determined as being smaller than 1.5 in general. However, the offsets of different human drummers may differ in standard deviation and mean.
  • For the empirical analysis, drums have been chosen because the distinction between accentuation and errors is easiest when analyzing sequences that contain time-periodic structures, such as drum sequences. However, in principle, the methods according to the invention may also be applied to other instruments played by humans. For example, for a piano player playing a song on the piano, it is to be expected that after removal of accentuation, the relevant noise obeys the same 1/fα law as discussed above with respect to drums.
  • Based on these empirically determined facts and figures, a method and a device for humanizing music, in particular drum sequences, may now be described as follows.
  • Figure 3 shows a flowchart of a method for humanizing music sequences according to a first embodiment of the invention. The music sequence is assumed to comprise a series of sounds, which may be notes, played by an instrument such as a drum, each occurring on a distinct time t. When humanizing real audio signals, the time t may be taken as the onset of the note, which may be detected automatically by a method known from the prior art (cf. e.g. Bello et al., A Tutorial on Onset Detection in Music Signals, IEEE Transactions on Speech and Audio Processing, Vol. 13, No. 5, September 2005).
  • In step 310, the method is initialized. In particular, the algorithm may be set to the first time t0 (i = 0).
  • In step 320, a random offset oi is generated for the present sound or note at time ti.
  • In step 330, the random offset oi is added to the time ti in order to obtain a modified time t'i. Hereby, it is understood that the offset oi may also be negative.
  • In step 340, the present sound si is output at the modified time t'i. The outputting step may comprise playing the sound in an audio device. It may also comprise storing the sound on a medium, at the modified time t'i, for later playing.
  • In step 350, the procedure loops back to step 320 in order to repeat the procedure for the remaining sounds.
  • According to the invention, the random offsets are generated such that their power spectral density obeys the law 1/fα, wherein α > 0.
  • The parameter α may be set according to the empirical estimates obtained as described in relation to figure 2.
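Steps 310 to 350 may be sketched in Python as follows. The function names, the spectral-shaping construction of the 1/fα offsets and the default values of α and of the standard deviation sigma are illustrative assumptions of this description, not requirements of the claims:

    import numpy as np

    def gaussian_one_over_f_noise(n, alpha, sigma, rng=None):
        """Gaussian noise with power spectral density ~ 1/f^alpha (spectral-shaping sketch)."""
        rng = np.random.default_rng() if rng is None else rng
        spectrum = np.fft.rfft(rng.standard_normal(n))   # spectrum of white Gaussian noise
        freqs = np.fft.rfftfreq(n)
        shaping = np.ones_like(freqs)
        shaping[1:] = freqs[1:] ** (-alpha / 2.0)        # amplitude ~ f^(-alpha/2) => power ~ 1/f^alpha
        noise = np.fft.irfft(spectrum * shaping, n)
        return sigma * noise / noise.std()               # rescale to the desired standard deviation

    def humanize(times, alpha=1.0, sigma=0.01, rng=None):
        """Steps 320-340: add a random offset o_i with 1/f^alpha PSD to each time t_i."""
        times = np.asarray(times, dtype=float)
        offsets = gaussian_one_over_f_noise(times.size, alpha, sigma, rng)
        return times + offsets                           # modified times t'_i = t_i + o_i

    # Example: humanize a 16-beat metronome grid at 120 BPM (T = 0.5 s).
    grid = 0.5 * np.arange(16)
    humanized_times = humanize(grid, alpha=1.0, sigma=0.01)

Whether the modified times are then rendered by shifting MIDI note-on messages or by shifting detected audio onsets is an implementation choice outside the scope of this sketch.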
  • Figure 4 shows a block diagram of a device 400 for humanizing a music sequence according to an embodiment of the invention.
  • Again, it is assumed that the music sequence (S) comprises a multitude of sounds (s1, ..., sn) occurring on times (t1, ..., tn). According to one embodiment of the invention, the device may comprise means 410 for generating, for each time (ti) a random offset (oi).
  • The device may further comprise means 420 for adding the random offset (oi) to the time (ti) in order to obtain a modified time (ti + oi).
  • Finally, the device may also comprise means 430 for outputting a humanized music sequence (S') wherein each sound (si) occurs on the modified time (ti + oi).
  • According to the invention, the power spectral density of the random offsets has the form 1/fα, wherein 0 < α < 2. Generators for 1/f or 'pink' noise are commercially available.
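Where no hardware generator is at hand, pink noise can also be approximated in software. The following Voss-style generator is an illustrative sketch (not a component named in the patent): it sums several random values that are refreshed at rates differing by octaves, which yields a spectrum close to 1/f.

    import numpy as np

    def voss_pink_noise(n, num_rows=16, rng=None):
        """Approximate pink (1/f) noise with the Voss algorithm - sketch only.

        Row k is refreshed every 2**k samples; summing the rows gives a
        spectrum close to 1/f over roughly num_rows octaves.
        """
        rng = np.random.default_rng() if rng is None else rng
        rows = rng.standard_normal(num_rows)
        out = np.empty(n)
        for i in range(n):
            for k in range(num_rows):
                if i % (2 ** k) == 0:                 # refresh row k every 2**k samples
                    rows[k] = rng.standard_normal()
            out[i] = rows.sum()
        return out / np.sqrt(num_rows)                # rough amplitude normalization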
  • Figure 5 shows another block diagram of a device for humanizing music sequences according to another embodiment of the invention. The device comprises a metronome 510, a noise generator 520, a module 530 for adding the random offsets to obtain a modified time sequence, a module 540 for outputting the sounds at the modified times, a module 550 for receiving an input sequence and a module 560 for analyzing the input sequence in order to automatically identify the relevant sounds.
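The module structure of figure 5 might be mirrored in software roughly as follows. The class names and interfaces are purely illustrative assumptions of this description; only the timing path corresponding to blocks 510 to 540 is sketched, not the input analysis of blocks 550 and 560.

    import numpy as np

    class Metronome:                          # cf. block 510: supplies the regular reference grid
        def __init__(self, t0=0.0, period=0.5):
            self.t0, self.period = t0, period
        def times(self, n):
            return self.t0 + self.period * np.arange(n)

    class OffsetGenerator:                    # cf. block 520: supplies 1/f^alpha distributed offsets
        def __init__(self, alpha=1.0, sigma=0.01, rng=None):
            self.alpha, self.sigma = alpha, sigma
            self.rng = rng if rng is not None else np.random.default_rng()
        def offsets(self, n):
            spectrum = np.fft.rfft(self.rng.standard_normal(n))
            freqs = np.fft.rfftfreq(n)
            shaping = np.ones_like(freqs)
            shaping[1:] = freqs[1:] ** (-self.alpha / 2.0)
            noise = np.fft.irfft(spectrum * shaping, n)
            return self.sigma * noise / noise.std()

    class Humanizer:                          # cf. blocks 530/540: add offsets, output modified times
        def __init__(self, metronome, generator):
            self.metronome, self.generator = metronome, generator
        def modified_times(self, n):
            return self.metronome.times(n) + self.generator.offsets(n)

    humanized = Humanizer(Metronome(), OffsetGenerator()).modified_times(32)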
  • SUMMARY
  • The deviation of human drum sequences from a given metronome may be well described by Gaussian distributed 1/fα noise, wherein the exponent α is distinct from 0. In principle, the results also apply to other instruments played by humans. In conclusion, the method and device for humanizing musical sequences may very well be applied in the field of electronic music as well as for post-processing real recordings. In other words, 1/fα noise is the natural choice for humanizing a given music sequence.

Claims (8)

  1. Method for humanizing a music sequence (S), the music sequence (S) comprising a multitude of sounds (s1, ..., sn) occurring on times (t1, ...,tn), comprising the steps
    - generating, for each time (ti) a random offset (oi),
    - adding the random offset (oi) to the time (ti) in order to obtain a modified time (ti + oi); and
    - outputting a humanized music sequence (S') wherein each sound (si) occurs on the modified time (ti + oi),
    characterised in that the power spectral density of the random offsets has the form 1/fα, wherein 0 < α < 2.
  2. Method according to claim 1, wherein the sounds correspond to drum beats.
  3. Method according to claim 1, wherein the sounds correspond to notes played by a piano.
  4. Method according to claim 1, wherein the music sequence (S) is obtained from editing a human-generated music sequence.
  5. Method according to claim 1, wherein the mean and/or the standard deviation of the offsets (oi) is set according to empirical estimates.
  6. Music sequence (S), comprising a multitude of sounds (s1, ..., sn) occurring on times (t'1, ..., t'n), wherein the times are offset with offsets (o1, ..., on) against the clicks (c1, ..., cn) of a metronome, wherein the power spectral density of the offsets (o1, ..., on) has the form 1/fα, wherein 0 < α < 2.
  7. Machine readable medium, comprising a humanized music sequence according to claim 5.
  8. Device for humanizing a music sequence (S), the music sequence (S) comprising a multitude of sounds (s1, ..., sn) occurring on times (t1, ... ,tn), comprising:
    - means for generating, for each time (ti) a random offset (oi),
    - means for adding the random offset (oi) to the time (ti) in order to obtain a modified time (ti + oi); and
    - means for outputting a humanized music sequence (S') wherein each sound (si) occurs on the modified time (ti + oi),
    characterised in that the power spectral density of the random offsets has the form 1/fα, wherein 0 < α < 2.
EP07117541A 2007-09-28 2007-09-28 Method and device for humanizing music sequences Active EP2043089B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP07117541A EP2043089B1 (en) 2007-09-28 2007-09-28 Method and device for humanizing music sequences

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP07117541A EP2043089B1 (en) 2007-09-28 2007-09-28 Method and device for humanizing music sequences

Publications (2)

Publication Number Publication Date
EP2043089A1 (en) 2009-04-01
EP2043089B1 (en) 2012-11-14

Family

ID=38859055

Family Applications (1)

Application Number Title Priority Date Filing Date
EP07117541A Active EP2043089B1 (en) 2007-09-28 2007-09-28 Method and device for humanizing music sequences

Country Status (1)

Country Link
EP (1) EP2043089B1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3974729A (en) * 1974-03-02 1976-08-17 Nippon Gakki Seizo Kabushiki Kaisha Automatic rhythm playing apparatus
US6066793A (en) * 1997-04-16 2000-05-23 Yamaha Corporation Device and method for executing control to shift tone-generation start timing at predetermined beat
US6506969B1 (en) * 1998-09-24 2003-01-14 Medal Sarl Automatic music generating method and device

Also Published As

Publication number Publication date
EP2043089B1 (en) 2012-11-14

Similar Documents

Publication Publication Date Title
US7485797B2 (en) Chord-name detection apparatus and chord-name detection program
JP4660739B2 (en) Sound analyzer and program
US7579546B2 (en) Tempo detection apparatus and tempo-detection computer program
US7250566B2 (en) Evaluating and correcting rhythm in audio data
JP5454317B2 (en) Acoustic analyzer
US20230402026A1 (en) Audio processing method and apparatus, and device and medium
US9147388B2 (en) Automatic performance technique using audio waveform data
US7777123B2 (en) Method and device for humanizing musical sequences
JP2009031486A (en) Method, apparatus, and program for evaluating similarity of performance sound
JP4613923B2 (en) Musical sound processing apparatus and program
US20080262836A1 (en) Pitch estimation apparatus, pitch estimation method, and program
US8766078B2 (en) Music piece order determination device, music piece order determination method, and music piece order determination program
US20210366454A1 (en) Sound signal synthesis method, neural network training method, and sound synthesizer
Jonason The control-synthesis approach for making expressive and controllable neural music synthesizers
US9064485B2 (en) Tone information processing apparatus and method
EP2043089B1 (en) Method and device for humanizing music sequences
Szeto et al. Source separation and analysis of piano music signals using instrument-specific sinusoidal model
Hastuti et al. Natural automatic musical note player using time-frequency analysis on human play
Fonseca et al. Low-latency f0 estimation for the finger plucked electric bass guitar using the absolute difference function
JP4625934B2 (en) Sound analyzer and program
JP4625935B2 (en) Sound analyzer and program
Kreutzer et al. Time domain attack and release modeling
Godsill Computational modeling of musical signals
Kreutzer et al. Time domain attack and release modeling-applied to spectral domain sound synthesis
Roig et al. Rumbator: A flamenco rumba cover version generator based on audio processing at note-level

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC MT NL PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL BA HR MK RS

17P Request for examination filed

Effective date: 20090317

R17P Request for examination filed (corrected)

Effective date: 20080317

AKX Designation fees paid

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC MT NL PL PT RO SE SI SK TR

17Q First examination report despatched

Effective date: 20090317

17Q First examination report despatched

Effective date: 20101001

GRAC Information related to communication of intention to grant a patent modified

Free format text: ORIGINAL CODE: EPIDOSCIGR1

GRAC Information related to communication of intention to grant a patent modified

Free format text: ORIGINAL CODE: EPIDOSCIGR1

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC MT NL PL PT RO SE SI SK TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

Ref country code: AT

Ref legal event code: REF

Ref document number: 584343

Country of ref document: AT

Kind code of ref document: T

Effective date: 20121115

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602007026654

Country of ref document: DE

Effective date: 20130103

REG Reference to a national code

Ref country code: NL

Ref legal event code: VDEP

Effective date: 20121114

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 584343

Country of ref document: AT

Kind code of ref document: T

Effective date: 20121114

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20121114

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20121114

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130225

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20121114

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20121114

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20121114

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130314

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20121114

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20121114

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130215

Ref country code: BE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20121114

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20121114

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20121114

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130214

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20121114

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20121114

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20121114

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20121114

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20121114

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20121114

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

Effective date: 20130815

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602007026654

Country of ref document: DE

Effective date: 20130815

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20121114

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

REG Reference to a national code

Ref country code: IE

Ref legal event code: MM4A

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20130928

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20130930

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20130930

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20121114

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20121114

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO

Effective date: 20070928

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20130928

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20121114

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 10

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 11

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 12

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20230921

Year of fee payment: 17

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20230918

Year of fee payment: 17

Ref country code: DE

Payment date: 20230828

Year of fee payment: 17