WO1992013334A1 - Performance controller for music synthesizing system - Google Patents

Performance controller for music synthesizing system

Info

Publication number
WO1992013334A1
WO1992013334A1
Authority
WO
WIPO (PCT)
Prior art keywords
voice
control means
value
performance
parameter
Prior art date
Application number
PCT/US1991/000653
Other languages
French (fr)
Inventor
Mark Zamcheck
Louis H. Auerbach
Original Assignee
Mark Zamcheck
Auerbach Louis H
Priority date
Filing date
Publication date
Application filed by Mark Zamcheck, Auerbach Louis H filed Critical Mark Zamcheck
Priority to PCT/US1991/000653 priority Critical patent/WO1992013334A1/en
Publication of WO1992013334A1 publication Critical patent/WO1992013334A1/en


Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H: ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00: Details of electrophonic musical instruments
    • G10H1/18: Selecting circuits

Definitions

  • The control devices may be of any type which produces a measure or value of a desired instantaneous relationship between the control devices or signals and the voice parameter values.
  • For example, the control devices may be sliders or faders which produce a voltage proportional to the position of the slider or fader.
  • Conversion means are used to convert this voltage to a digital value which defines the position of the slider or fader.
  • In a second form of the device, the control devices may be rotary encoders which produce incremental or absolute values of the rotation angle of the encoder.
  • An incremental encoder allows the apparatus to automatically reset the apparent position of the controller to the base voice when desired.
  • The control means may be any one of the above means, and the control signals may be of the type known as 'continuous controller' signals.
  • The linked relationship between the control values and the parameters may be obtained by using lookup tables (LUTs), where the lookup index is the control value and the contents are the desired parameter value.
  • This method requires a separate contents value for each parameter which is to be modified by a particular control device or signal, and for each index value of the control device or signal.
  • A second method is to define a mathematical relationship between the control device and voice parameters. Combinations of these methods may also be used.
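The two linkage methods above can be sketched as follows. The table contents, the parameter name, and the linear formula are illustrative assumptions, not values taken from the patent; both mappings accept a control index in the MIDI data range 0..127 and return a parameter value in the same range.

```python
# Method 1: a lookup table (LUT) -- one stored entry per possible control index.
# The table contents here are arbitrary illustrative values.
brightness_lut = [min(127, i * 2) for i in range(128)]

def lut_value(index):
    """Return the parameter value stored for this control index."""
    return brightness_lut[index]

# Method 2: a mathematical relationship between control value and parameter.
def formula_value(index, base=20, gain=0.8):
    """A hypothetical linear mapping, clamped to the 0..127 data range."""
    return max(0, min(127, round(base + gain * index)))

print(lut_value(40))      # -> 80
print(formula_value(40))  # -> 52
```

The LUT costs one entry per index per parameter, as the text notes, while the formula needs only its coefficients; a real metapatch program could mix both.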
  • The output of the apparatus is connected to the MIDI IN port of the music generating computer.
  • A MIDI IN port of the performance controller is connected to the MIDI OUT port of the music generating device and/or other MIDI devices.
  • When a control device is moved, the voice parameter values for the new position of the control device are derived, converted to SysEx data for the receiving music generating device or devices, and transmitted over a MIDI cable to these devices.
  • Thus all voice or other parameters of the receiving device are changed immediately to values which are related to the position of the control device.
  • The performance controller also receives MIDI data by means of an IN port.
  • This IN port may receive messages known as 'continuous controller' data from other MIDI devices.
  • This continuous controller data is converted to a control index, just as for a control device within the apparatus; the voice parameters are derived, converted to SysEx messages, and transmitted to the receiving device or devices.
  • In this way the instantaneous values of parameters within the receiving device or devices are changed immediately in response to control signals from other MIDI devices.
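The continuous-controller path above can be sketched as follows: an incoming controller message is treated exactly like a local slider, its 0..127 value serving as the lookup index. The mapping `cc_map` (controller number to slider number) and the helper names are assumptions for illustration, not from the patent.

```python
def handle_continuous_controller(cc_number, cc_value, cc_map, tables, send_sysex):
    """Convert an incoming 'continuous controller' value to a control index
    and transmit the corresponding voice parameter values as SysEx."""
    j = cc_map.get(cc_number)
    if j is None:
        return                              # controller not mapped: ignore it
    for param_id, lut in tables[j].items():
        send_sysex(param_id, lut[cc_value])  # cc_value used as the LUT index

# Minimal demonstration: controller #1 mapped to "slider" 0.
sent = []
tables = [{7: [i // 2 for i in range(128)]}]   # illustrative parameter table
handle_continuous_controller(1, 100, {1: 0}, tables, lambda p, v: sent.append((p, v)))
print(sent)  # -> [(7, 50)]
```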
  • The performance controller may also receive, by means of its IN port, messages known as 'dump data' which may be used to determine the current values of the voice parameters within the controlled music generating device. These dump values may then be used to modify the values which relate control values to voice parameter values, so that a set of relationships derived between control values and parameter values for one base voice (patch) may be used with another base voice (patch). Two arithmetic operations on parameters are used to prevent musical discontinuities when applying a set of relationships from one patch to another.
  • FIG. 1 is a block diagram of a MIDI embodiment of the device.
  • FIG. 2 is a detailed block diagram of performance controller 6.
  • FIG. 3 is a block diagram of an embodiment of the performance controller embedded in the sound generating device.
  • FIG. 4 is a diagram depicting the relationship between the modification of voice parameters of a DX7 synthesizer and the attendant musical parameters.
  • FIG. 5 is a flow diagram of the operation of the device in a performance mode.
  • FIG. 6 shows the equations and effect of the integrate mode on a parameter.
DETAILED DESCRIPTION OF A PREFERRED EMBODIMENT
  • The present device is intended as a user manipulable device for dynamically "performing" a composition by sending a message sequence defining the sounds thereof.
  • The device is differentiated from the present art by controlling the parameters, known as voice parameters, of a music generating device, thus creating a performance dynamically, on an instant to instant basis.
  • These voice parameters are identified by the requirement to use System Exclusive messages to access and modify the parameters.
  • A representative MIDI SysEx message is illustrated in Figure 1.
  • The parameters may be manufacturer-proprietary parameters not having a generally recognized meaning outside the processing performed by a particular brand of musical instrument.
  • A parameter and its associated value are shown as a pair of 8-bit words, i.e., a number between 0 and 127 specifies a parameter, and a second number between 0 and 127 specifies the value of that parameter.
  • The exact form of the parameter and value varies from manufacturer to manufacturer.
  • The MIDI messages appear as a serial bitstream on the communication line at an asynchronous serial rate of 31.25 kbaud.
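The message shape described above can be sketched as a small builder. The byte layout between the Start-of-Exclusive and EOX bytes varies by manufacturer, so this generic form (one manufacturer ID byte, then one parameter byte and one value byte, each 0..127) is illustrative only; 0x43 is Yamaha's manufacturer ID, used here purely as an example.

```python
SOX, EOX = 0xF0, 0xF7   # Start / End of System Exclusive status bytes

def sysex_param_change(manufacturer_id, parameter, value):
    """Build a SysEx message carrying one parameter/value pair.

    Generic illustrative layout; real instruments insert further header
    bytes (device number, group codes, etc.) between the ID and the data."""
    for b in (manufacturer_id, parameter, value):
        if not 0 <= b <= 127:
            raise ValueError("data bytes must be 7-bit (0..127)")
    return bytes([SOX, manufacturer_id, parameter, value, EOX])

msg = sysex_param_change(0x43, 21, 99)
print(msg.hex())  # -> f0431563f7
```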
  • The performance controller 6 is connected to one or more sound generating or MIDI controlled devices 3 by means of a communication line 5.
  • The performance controller is illustrated as having a plurality of slide switches or faders 7 and switches 8.
  • The performer controls the device using the elements 7, which, for ease of discussion, are referred to simply as "sliders".
  • Switches 8 are used to select one of a number of embedded performances.
  • A merge 10 combines, in a non-interfering way, two MIDI data streams.
  • The merge 10 is a standard MIDI device produced by a number of manufacturers.
  • The merge may also be included integrally with the performance controller.
  • The merge facilitates reception, by the performance controller, of data from the music generating device.
  • This data from the music generating device is used mainly for the OFFSET and INTEGRATE modes of operation, as explained more fully below, but may also contain general performance setup data (known as patch change messages) or continuous controller data.
  • The MIDI line 4 may contain note on/note off data, continuous controller data, or other general control information.
  • FIG. 2 shows a prototype construction of an embodiment of a performance controller 6.
  • A processor 12 is connected in communication via a MIDI interface 5 with a line carrying MIDI control messages, and is also connected to input device conditioning circuitry 16 which, as described further below, allows a user to indicate the desired performance characteristics at any instant in time by moving conventional instrument control devices, referred to herein as "sliders", which may be faders, foot pedals or the like.
  • A memory 15 performs the several functions of initializing the microprocessor, synchronizing the operation of the inputs and outputs of the device, and containing multiple sets of performance data, described more fully below.
  • The switches 8 allow the performer to select one set, from multiple sets of performance data, for a particular performance.
  • The selector switch 9 allows the performer to select the mode of operation of the performance controller. Three modes of operation are described: STANDARD PERFORM, OFFSET, and INTEGRATE.
  • Figure 3 depicts an embodiment of the device wherein the performance controller is contained within a synthesizer or other music generating device. This embodiment applies the voice parameter data directly to the music generating device and receives performance and control information over a MIDI communication link. In other respects the two embodiments operate the same.
  • 'Horn attack' may be linked, for example, to slider #1 and 'horn valves' to slider #2, thus providing a means to vary these performance values over a large range by use of the sliders.
  • Figure 4 also depicts the implementation of the graphs in lookup tables (LUTs) in the parameter data store portion of processor memory 15. Although the lookup tables or performance data set are shown contained in the processor PROM, it will be clear to those skilled in the art that the performance data sets may be contained in external data storage, such as magnetic discs or other storage media, and loaded into the controller memory before the performance.
  • Selection of a performance by depressing a switch 8, and selecting STANDARD PERFORM with selector 9, causes the processor to transmit, by way of the MIDI interface 5, to the sound generating device, a set of parameters which define the initial voice (patch) for this performance.
  • The conditioning circuitry 14 includes a multiplexer which allows the processor 11 to poll the values of the sliders 7i...7j. If a change in index of any slider or sliders is detected, the new index position of the slider or sliders is used by the processor to recover the new voice parameter values for that index.
  • For each index position a_j of a slider j, the controller retrieves the parameter identifications p_j and corresponding parameter values v_j associated with the slider as follows. For each occurrence of a parameter p_j associated with the j-th slider, the processor looks up, in the table T_j compiled for that slider, the value of p_j corresponding to the current slider position a_j, denoted p_j(a_j).
  • For each slider in turn, the processor determines the value of the slider. If a value change is detected, the processor looks up the values v_j, sends the new value in a SysEx message for each p_j, and moves to the next slider. This series of operations continues until a new mode of operation is selected.
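The STANDARD PERFORM polling loop just described can be sketched as follows. Here `tables[j]` maps slider j to a dict of parameter IDs and their 128-entry value tables; `read_slider` and `send_sysex` stand in for the conditioning circuitry and the MIDI interface, and all names and values are assumptions for illustration.

```python
def perform_step(sliders, tables, last_positions, send_sysex):
    """Poll every slider once; transmit new values for any slider that moved."""
    for j, read_slider in enumerate(sliders):
        index = read_slider()
        if index == last_positions[j]:
            continue                             # no change: skip this slider
        last_positions[j] = index
        for param_id, lut in tables[j].items():
            send_sysex(param_id, lut[index])     # p_j(a_j) for the new index

# Minimal demonstration: one slider linked to two parameters.
sent = []
tables = [{10: list(range(128)), 11: [127 - i for i in range(128)]}]
perform_step([lambda: 64], tables, [None], lambda p, v: sent.append((p, v)))
print(sent)  # -> [(10, 64), (11, 63)]
```

In the device this step would run continuously until a new mode is selected.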
  • The performer may wish to apply the stored performance data set to a base voice (patch) which is a variation of the base voice stored in the performance PROMs, comprising a portion of the performance data store.
  • In this case the dump or initial values of voice parameters may be different for the stored and selected voices.
  • The performance controller 6 first, at 51, sends a standard MIDI message to request a data dump from the instrument 3, by way of MIDI line 5. This causes the instrument 3 (refer to FIG. 1) to send out, by way of MIDI line 11 and merge 10, a stream of messages consisting of a byte identifying each active parameter and a byte identifying its current value.
  • The controller polls the MIDI interface 11 to retrieve and store these parameters and their current values. It then undertakes a loop with steps 53, 57, in which it polls a slider and looks up the parameter values corresponding to the current slider position.
  • The controller adds an offset (60) to each LUT entry for parameter p_j. This may be done by saving the dump offset value and adding it each time a p_j parameter value is sent.
  • The loop 53, 57 is similar to the STANDARD PERFORM mode loop, with the exception of the smoothing operation.
  • In the OFFSET mode of operation it is possible to exceed the upper or lower limits of a parameter value. In practice it is necessary to limit the offset to these values, so that flattened portions of the parameter curve can occur at either end of the slider position. For some metapatch™ programs this may be undesirable.
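The OFFSET computation can be sketched in a few lines: the dump-derived offset is added to each stored table value, clamped to the legal 0..127 data range. The clamping is exactly what produces the flattened curve ends mentioned above. Table contents and the offset value are illustrative assumptions.

```python
def offset_value(lut, index, dump_offset):
    """OFFSET mode: stored table value plus the dump-derived offset,
    clamped to the MIDI data range 0..127."""
    return max(0, min(127, lut[index] + dump_offset))

lut = [30, 60, 90, 120]    # illustrative stored parameter curve
offset = 20                # dumped value minus stored base value
print([offset_value(lut, i, offset) for i in range(4)])  # -> [50, 80, 110, 127]
```

Note how the last entry clamps at 127 instead of reaching 140: this is the flattening the text warns about, which the INTEGRATE mode is designed to avoid.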
  • In the INTEGRATE mode, a different parameter value computation is performed at step 55b to preserve relative changes in the range of parameter values while removing timbral discontinuities. In this mode of operation, the offset at the initial starting index (index 4 in Figure 6, for example) is retained, but other index values are derived to cause the parameter value to approach the end values of the stored data set asymptotically, effectively scaling the range in response to the initial value of the parameter.
  • This form of calculation is depicted in Figure 6.
  • This operation is performed at step 55b.
  • The table may be rewritten in its entirety at step 52 by performing the calculations of Figure 6, or the calculation may be performed dynamically, as the slider position is changed, as described in the previous modes. If the table is so rewritten, then the controller may revert to the STANDARD PERFORM mode at step 53 with the rewritten set of tables as its reference. In either case, the INTEGRATE mode generally preserves the relative direction of change and approximate magnitude of the parameter values, while the values never flatten out at the 0 or 127 extremity as would happen if the simple numerical offset were added to each value. The resultant tonal quality is thus improved.
  • The method used may be selected by the performer, or the processor may use a set of decision values based on the initial voice values to make the choice.
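The Figure 6 equations are not reproduced in this text, so the function below is one plausible reading of the described behaviour, not the exact computation: the dumped value replaces the stored value at the starting index, and each side of the curve is rescaled so the parameter still approaches the table's original end values instead of clamping.

```python
def integrate_value(lut, index, start_index, dump_value):
    """INTEGRATE-mode sketch: piecewise-linearly rescale the stored curve
    around the starting index so direction of change and the end values
    are preserved, with no flattening at the extremes."""
    t0, t = lut[start_index], lut[index]
    lo, hi = min(lut), max(lut)
    if t >= t0:
        if hi == t0:
            return dump_value
        # upper segment: map [t0, hi] onto [dump_value, hi]
        return round(dump_value + (t - t0) * (hi - dump_value) / (hi - t0))
    if t0 == lo:
        return dump_value
    # lower segment: map [lo, t0] onto [lo, dump_value]
    return round(lo + (t - lo) * (dump_value - lo) / (t0 - lo))

lut = [0, 32, 64, 96, 127]
# Stored value at starting index 2 is 64, but the dumped patch value is 90:
print([integrate_value(lut, i, 2, 90) for i in range(5)])  # -> [0, 45, 90, 109, 127]
```

Compare with simple offsetting (adding 26 everywhere), which would clamp the top entries at 127 and flatten the curve; here the endpoints 0 and 127 are still reached exactly.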
  • A principal advantage of a controller according to the invention is its simplicity of use and universality of application.
  • The essential operation of the device entails construction of tables wherein the value of each position signal of a slider 7_j is associated with a specific value of each of one or more parameters p_j by compiling a table T_j.
  • The set of tables may be replaced by sets of equations, or by stored curves interrelating the parameter values.
  • Alternatively, MIDI messages termed 'continuous controller data' may be intercepted by the controller and used, in conjunction with or instead of the slider values, to derive the indices for changing the parameter values.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Electrophonic Musical Instruments (AREA)

Abstract

A performance controller (6) for connection to a music synthesizing system (Fig. 3) which generates music in response to performance information and voice information. The performance controller comprises a plurality of adjustable voice control means (7), each having a range of voice control values, each voice control means defining a voice control value capable of being adjusted by an operator during a performance. A parameter data store (Fig. 3) stores parameter data defining a plurality of voices for the music synthesizing system, each associated with one of the plurality of voice control means. In addition, a control means (16) operatively determines the voice control value of each of the voice control means, determines, in response to a change of the voice control value, the parameter values of parameters related thereto, and provides voice information to the music synthesizing system, to thereby permit voice modification during a performance.

Description

PERFORMANCE CONTROLLER FOR MUSIC SYNTHESIZING SYSTEM
BACKGROUND OF THE INVENTION
The invention relates to electronic music systems, and more particularly to a control of musical devices which allows a real time variation of the musical characteristics of these devices. This invention is more specifically directed to synthesizers or computer generated music.
Advances in electronic technology have led to the development of a series of musical instruments. Musical synthesizers, electronic pianos, organs, etc. represent a few of the devices in this class of instruments. Most music today is played or generated by this class of device. The advantage of computer generation of music lies in the extremely large range and variety of sounds that can be generated by a single instrument. Along with the advantages which have been gained by the use of these instruments, a major disadvantage has also resulted. The performance sound of an acoustical instrument is highly dependent on the capabilities of the artist who is performing. The same instrument and performance given by two different artists, or by the same artist at different times, will often produce different results due to the complex interplay between the artist and the instrument itself. Acoustical instruments have extremely complex sound producing capabilities which change from instant to instant at the whim of the artist. Computer sound generating devices lack this facility and thus tend to be played rather than performed.
Presently, sound generation devices are controlled externally by means of a stream of digital data termed a MIDI stream, an acronym for "Musical Instrument Digital Interface". In such a system, a communications line is established between a sound generating unit and one or more control signal generating units, which may, for example, be proprietary electronic keyboards or specially programmed digital computers.
The communications line is a current loop along which messages are transmitted in a fixed digital format at an asynchronous serial rate of 31.25 kbaud. All MIDI communication, except certain system exclusive communications, is achieved through multi-byte messages consisting of one Status byte followed by one or two Data bytes. A MIDI message may contain, in a specified order, a channel identifier or certain system specific data (SYS EX), and a code for a specific "parameter" followed by a value between 0 and 127 for the parameter. Each parameter governs an attribute, such as volume, frequency, or envelope attack or decay, which controls the generation, shaping or modulation of an electrical drive signal provided to a sound transducer. Reference is made to the IMA MIDI Specification 1.0 for a more detailed description of the format of MIDI control signals. For purposes of this discussion, however, it is sufficient to observe that any single note or chord in an electronic music composition will in general be specified by a great number of parameters and their corresponding values occurring in a sequence of MIDI messages, and that each parameter and its value are present as digital words in the MIDI control system. A number of these parameters will be system exclusive parameters which control the generation of a sound by a particular instrument. Unlike the standardized messages used for the generation of notes and the more universal aspects of tonal quality, these SYS EX messages may contain any number of Data bytes, and are terminated by an End of Exclusive (EOX) or any other Status byte. These messages include a Manufacturers Identification (ID) Code, which enables specific machines of that manufacturer to recognize the ensuing message data.
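The byte structure just described (one Status byte with the high bit set, one or two Data bytes, and SysEx running until EOX or the next Status byte) can be sketched as a small parser. The stream contents are made up for illustration, and system messages other than SysEx are not handled.

```python
def parse_midi(stream):
    """Split a raw MIDI byte sequence into individual messages."""
    # Channel message types carrying two data bytes (note on/off, poly
    # aftertouch, control change, pitch bend); program change and channel
    # aftertouch carry only one.
    TWO_DATA = {0x80, 0x90, 0xA0, 0xB0, 0xE0}
    messages, i = [], 0
    while i < len(stream):
        status = stream[i]
        if status == 0xF0:                       # System Exclusive: variable length
            j = i + 1
            while j < len(stream) and stream[j] < 0x80:
                j += 1                           # consume data bytes
            end = j + 1 if j < len(stream) and stream[j] == 0xF7 else j
            messages.append(stream[i:end])
            i = end
        elif status & 0x80:                      # Status byte: 1 or 2 data bytes
            n = 2 if (status & 0xF0) in TWO_DATA else 1
            messages.append(stream[i:i + 1 + n])
            i += 1 + n
        else:
            i += 1                               # stray data byte: skip it
    return messages

stream = bytes([0x90, 60, 100, 0xF0, 0x43, 21, 99, 0xF7, 0xC0, 5])
print([m.hex() for m in parse_midi(stream)])
# -> ['903c64', 'f0431563f7', 'c005']
```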
In the past, many devices have been proposed to control the real time characteristics of electronic music generation devices so as to more nearly approach the performance characteristics of acoustical instruments. These devices are mainly concerned with the control of the parameters of a device which may be termed 'performance parameters'. Performance parameters include such items as 'Pitch Bend', 'Portamento', and 'note on' and 'note off' data. The standard use of these parameters is well known, and they are typically adjusted either by controls contained within the synthesizer or by external control devices or computers. The MIDI (Musical Instrument Digital Interface) specification assigns specific control functions to handle this type of real time performance function.
As an example of the prior art, Deutsch et al 4,823,667 discloses a guitar controlled instrument. The end purpose of this instrument is to provide 'note data' to a synthesizer by detecting which strings of a guitar have been played.
James A. Corrigan 4,794,838 discloses another form of controller which allows the performer to change performance parameters such as portamento, vibrato, or glissando effects, as well as notes and pitch characteristics.
Steven C. Marshall 4,748,887 also discloses another form of guitar controller which addresses MIDI parameters. This patent, like the rest, is mainly concerned with control of factors such as pitch bend and note data.
The disclosure of Akio Iba 4,817,484 teaches a controller which again provides means of detecting and transmitting MIDI performance data, and in addition provides a means to remotely (at the controller) perform what is known in the art as a patch change. All synthesizers have a number of selectable base voices or patches built in, which may be selected by means of switches on the device or by means of MIDI patch change messages. A voice or patch is the fixed set of all voice parameters within a synthesizer. This patent teaches a method to produce these patch changes from the controller.
Although performance parameters and patch changes do allow some variation of the instrument during a performance, they do not address the fundamental problem of providing the richness of sound variation which is produced by an acoustical instrument.
Computer generated music devices normally have a wide variety of sounds which can be selected by means of switches or external devices. These sounds are typically called 'voices' or 'patches'. The creation of a voice is a complex programming task which is normally done by professional musical programmers. Voices are produced by selecting and changing parameters within the device. These parameters are typically referred to as voice parameters. All computer music generating devices have a set of these parameters regardless of the actual technique used for generating the sound. The functions of these parameters are different for FM synthesis than they are for PCM synthesis or Linear Arithmetic synthesis. The number of voice parameters in a synthesizer can vary from as few as 150 to over 300, depending on the synthesizer type. The interplay of these parameters when attempting to produce a new voice is complex. A complete set of parameters for a particular voice is known in the industry as a 'patch'. The programming of a patch is so difficult and time consuming that most artists do not create patches, but rather purchase them from companies which specialize in producing patches. Patches (or voices) may be edited by controls on the synthesizer or by means of MIDI using special messages called system exclusive messages (SysEx). These controls or SysEx messages are normally used to provide a minor variation in an existing patch which the artist may find more to his taste. This new patch is then stored within the synthesizer and becomes a new voice. Most professional artists have libraries of thousands of patches stored in a computer. Editing or producing a patch is normally a studio or pre-performance task.
The weakness of present synthesizers in attempting to reproduce the performance characteristics of an acoustic device lies with the method of producing new voices. An acoustic instrument, when played in a performance, produces hundreds of different voices. The instantaneous voice of a violin, for instance, depends on the placement and pressure of the fingers on the fingerboard, the pressure and speed of the bow, whether the bow strikes the strings or is drawn across them, etc. This variation of the voice characteristics of an acoustical instrument, which is highly dependent on the performer, is what provides the richness and variability of acoustical instrument performances. Thus there is a need to provide a means for producing this richness and variation of the sound characteristics in computer generated music devices.
SUMMARY OF THE INVENTION
The present invention is directed to apparatus and computer software which allows a synthesizer, or other computer generated musical device, to be controlled in real time, i.e. from instant to instant, in a manner more characteristic of true acoustical instruments. In a preferred embodiment of the invention, the control and method of producing the desired results is accomplished by means of a MIDI interface connection between the apparatus of this invention and one or more music generating devices. The revelations of this invention will make it clear to one skilled in the art that the use of a MIDI interface is not a limitation on the application of the invention. It is a natural extension of this disclosure to incorporate the techniques of this invention directly in the musical device, eliminating the necessity of the MIDI interface. This incorporation into the music generating device eliminates the communication delays associated with the MIDI implementation and is contemplated as a desirable implementation.
The mechanism of the invention makes use of the voice parameters of a music generating instrument, not to edit a voice as is done in the present art, but rather to provide a complex interplay of voice parameter modification which is under the control of the performing artist on an instant to instant basis. In a MIDI interface implementation, these voice parameters are differentiated from the normal performance parameters by the requirement to access them by means of SysEx messages as opposed to control messages. The interplay of parameters is complex because there is no direct relationship between the variation of specific voice parameters and the musical parameters with which an artist or performer normally deals. The reproduction of the musical characteristic 'bowing' for a stringed instrument requires an entirely different set of parameters and interactions for FM synthesis than for Linear Arithmetic synthesis. The use of this invention provides a translation from this interplay of voice parameters, in a transparent way, into controls which are related to the musical variation normally expected of the basic starting voice or voices of the instrument or instruments. The translation, from individual parameters to a musical description for any synthesis type, is called a metapatch™ program. A patch is a set of voice parameters used to produce a specific voice within the instrument. A metapatch™ program produces a spectrum of voices with a dimensionality equal to the number of variable controls on the apparatus. By use of the controls contained on the apparatus, the sound of the instrument can be moved through this voice space on an instant to instant basis.
From the above description, it can be seen that a metapatch™ program, in a generic way, translates the operations normally performed by a specialist (the musical programmer) into a form which the user (artist or performer) may employ intuitively over a continuous range of values.
Further, to provide greater utility to the user of this apparatus, means are provided by which the performer may use a metapatch™ program with a standard patch or voice to provide this same dimensionality in a musically meaningful way.

These and other advantageous features are obtained, in a performance controller, by creating a linked relationship from a single control device or a plurality of control devices or control signals to single or multiple voice parameters. The control devices may be of any type which produces a measure or value of a desired instantaneous relationship between the control devices or signals and the voice parameter values.
In one form of the apparatus, the control devices may be sliders or faders which produce a voltage proportional to the position of the slider or fader. Conversion means are used to convert this voltage to a digital value which defines the position of the slider or fader.
In a second form of the device, the control devices may be rotary encoders which produce incremental or absolute values of the rotation angle of the encoder. The use of an incremental encoder allows the apparatus to automatically reset the apparent position of the controller to the base voice when desired.
In a MIDI implementation of the device, the control means may be any one of the above means and the control signals may be of the type known as 'continuous controller' signals.
The linked relationship between the control values and the parameters may be obtained by using look up tables (LUTs), where the look up index is the control value and the contents are the desired parameter value. This method requires a separate contents value for each parameter which is to be modified by a particular control device or signal, and for each index value of the control device or signal. A second method is to define a mathematical relationship between the control device and the voice parameters. Combinations of these methods may also be used.
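By way of illustration only, the LUT linkage just described may be sketched in software as follows. The control index, parameter numbers, and table contents here are invented for the example and form no part of the disclosure; they merely show one table per linked parameter, indexed by the control value:

```python
# Hypothetical sketch of the look-up-table (LUT) linkage: for each control
# device, each linked voice parameter has its own table, indexed by the
# control value, whose contents are the desired parameter value.

# Two voice parameters linked to control device 0; all values are invented.
LUTS = {
    0: {                                          # control device index
        0x12: [0, 8, 16, 24, 32, 40, 48, 56, 64],   # parameter 0x12 over 9 positions
        0x4C: [64, 60, 56, 52, 48, 44, 40, 36, 32], # parameter 0x4C over 9 positions
    },
}

def parameters_for(control: int, index: int):
    """Return the (parameter, value) pairs for a control device at a given index."""
    return [(param, table[index]) for param, table in LUTS[control].items()]
```

Each index change on a control device thus yields one value per linked parameter, which the controller may then transmit.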
In a MIDI embodiment of the performance controller, the output of the apparatus is connected to the MIDI IN port of the music generating computer. A MIDI IN port of the performance controller is connected to the MIDI OUT port of the music generating device and/or other MIDI devices. In one mode of operation, each time the performance controller detects a change in one of its control devices, the voice parameter values for the new position of the control device are derived, converted to SysEx data for the receiving music generating device or devices, and transmitted over a MIDI cable to these devices. Thus all voice or other parameters of the receiving device are changed immediately to values which are related to the position of the control device.
In an alternate or concurrent mode of operation, the performance controller also receives MIDI data by means of an IN port. This IN port may receive messages known as 'continuous controller' data from other MIDI devices. This continuous controller data is converted to a control index, in the same way as for a control device within the apparatus; the voice parameters are derived, converted to SysEx messages, and transmitted to the receiving device or devices. Thus the instantaneous values of parameters within the receiving device or devices are changed immediately in response to control signals from other MIDI devices.
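The continuous controller path just described may be sketched as follows; the mapping of controller numbers to control devices is an invented example (e.g. the mod wheel, controller 1, standing in for one of the sliders), and only the general MIDI control-change framing (status byte 0xBn, controller number, 7-bit value) is taken as given:

```python
# Sketch of the continuous-controller path: an incoming MIDI control-change
# message is treated like a local control device -- its 7-bit value becomes
# the control index used for the parameter lookup.

CC_TO_CONTROL = {1: 0}        # invented example: mod wheel (CC 1) drives control device 0

def control_index_from_cc(message: bytes):
    """Return (control device, index) for a mapped control-change message, else None."""
    status, cc_number, cc_value = message
    if status & 0xF0 == 0xB0 and cc_number in CC_TO_CONTROL:
        return CC_TO_CONTROL[cc_number], cc_value
    return None
```

A message that is not a mapped control change is simply ignored by this path.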
In an initial mode of operation, the performance controller may also receive, by means of its IN port, messages known as 'dump data' which may be used to determine the current values of the voice parameters within the controlled music generating device. These dump values may then be used to modify the values which relate control values to voice parameter values, so that a set of relationships derived between control values and parameter values for one base voice (patch) may be used with another base voice (patch). Two arithmetic operations on parameters are used to prevent musical discontinuities when applying a set of relationships from one patch to another.
BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram of a MIDI embodiment of the device. FIG. 2 is a detailed block diagram of performance controller 6. FIG. 3 is a block diagram of an embodiment of the performance controller embedded into the sound generating device.
FIG. 4 is a diagram depicting the relationship between the modification of voice parameters of a DX7 synthesizer and the attendant musical parameters.
FIG. 5 is a flow diagram of the operation of the device in a performance mode.
FIG. 6 shows the equations and effect of the integrate mode on a parameter.

DETAILED DESCRIPTION OF A PREFERRED EMBODIMENT

The present device is intended as a user manipulable device for dynamically "performing" a composition by sending a message sequence defining the sounds thereof. In a MIDI embodiment, the device is differentiated from the present art by controlling the parameters, known as voice parameters, of a music generating device, thus creating a performance dynamically, on an instant to instant basis. In a MIDI embodiment, on current music generating devices, these voice parameters are identified by the requirement to use System Exclusive messages to access and modify the parameters. A representative MIDI SysEx message is illustrated in Figure 1. It may include a first portion, consisting of a system-identifying portion, a second portion identifying the manufacturer, and further portions which identify specific parameters and parameter values. The parameters may be manufacturer-proprietary parameters not having a generally recognized meaning outside the processing performed by a particular brand of musical instrument. For purposes of illustration, a parameter and its associated value are shown as a pair of 8-bit words - i.e., a number between 0 and 127 specifies a parameter, and a second number between 0 and 127 specifies the value of that parameter. The exact form of the parameter and value varies from manufacturer to manufacturer.
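For purposes of illustration only, a message with the generic structure described above may be assembled as follows. The start (0xF0) and end (0xF7) bytes are standard MIDI System Exclusive framing; the manufacturer identifier and single parameter/value layout are placeholders, since, as the text notes, the exact form varies from manufacturer to manufacturer:

```python
# Illustrative assembly of a parameter-change SysEx message: start byte,
# manufacturer ID, parameter number, parameter value, end byte.  The layout
# between the framing bytes is a placeholder, not any vendor's actual format.

SYSEX_START, SYSEX_END = 0xF0, 0xF7

def make_sysex(manufacturer_id: int, parameter: int, value: int) -> bytes:
    """Build a minimal parameter-change message; data bytes must be 7-bit."""
    if not (0 <= parameter <= 127 and 0 <= value <= 127):
        raise ValueError("parameter and value must be 7-bit (0-127)")
    return bytes([SYSEX_START, manufacturer_id, parameter, value, SYSEX_END])
```

A real implementation would substitute the receiving instrument's documented SysEx layout between the framing bytes.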
The MIDI messages appear as a serial bitstream on the communication line at an asynchronous rate of 31.25 kbaud. In a first embodiment of the device, as shown in Figure 1, the performance controller 6 is connected to one or more sound generating or MIDI controlled devices 3 by means of a communication line 5. By way of illustration, the performance controller is illustrated as having a plurality of slide switches or faders 7 and switches 8. The performer controls the device using the elements 7, which, for ease of discussion, are referred to simply as "sliders". Switches 8 are used to select one of a number of embedded performances. Also depicted in Figure 1 is a merge 10, which combines two MIDI data streams in a non-interfering way. The merge 10 is a standard MIDI device produced by a number of manufacturers. The merge may also be included integrally with the performance controller. The merge facilitates reception by the performance controller 6 of data from both a computer or keyboard and from the music generating device or devices. The data from the music generating device is used mainly for the OFFSET and INTEGRATE modes of operation, as explained more fully below, but may also contain general performance setup data (known as patch change messages) or continuous controller data. The MIDI line 4 may contain note on/note off data, continuous controller data, or other general control information.
Figure 2 shows a prototype construction of an embodiment of a performance controller 6. A processor 12 is connected in communication via a MIDI interface 5 with a line carrying MIDI control messages, and is also connected to an input device 7 through conditioning circuitry 16, which, as described further below, allows a user to indicate the desired performance characteristics at any instant in time by moving conventional instrument control devices referred to herein as "sliders", which may be faders, foot pedals or the like. A MIDI IN line 17, which may be the output of merge 10, or line 11 or line 4 directly, provides a means to supply external data to the performance controller. A memory 15 performs the several functions of initializing the microprocessor, synchronizing the operation of the inputs and outputs of the device, and containing multiple sets of performance data, described more fully below. The switches 8 allow the performer to select one set, from multiple sets of performance data, for a particular performance. The selector switch 9 allows the performer to select the mode of operation of the performance controller. Three modes of operation are described: STANDARD PERFORM, OFFSET, and INTEGRATE.
Figure 3 depicts an embodiment of the device wherein the performance controller is contained within a synthesizer or other music generating device. This embodiment applies the voice parameter data directly to the music generating device and receives performance and control information over a MIDI communication link. In other respects the two embodiments operate identically.
The purpose of the contents of a performance data set can be better understood by reference to Figure 4, which graphs the relationship between the musical terms 'horn attack' and 'horn valves' used in a horn performance and the voice parameter values required to implement these musical characteristics on a Yamaha DX7 synthesizer. The voice parameter data for this explanation is represented by a 9 point curve using 9 indices from the control device, but in actual practice contains many additional points and indices. The effects of these parameter modifications would be evident to one skilled in the art of synthesizer programming if these parameter modifications are applied to a known voice (patch). During the horn performance, 'horn attack' may be linked, for example, to slider #1 and 'horn valves' to slider #2, thus providing a means to vary these performance values over a large range by use of the sliders. Figure 4 also depicts the implementation of the graphs in lookup tables (LUTs) in the parameter data store portion of processor memory 15. Although the lookup tables or performance data set is shown contained in the processor PROM, it will be clear to those skilled in the art that the performance data sets may be contained in external data storage, such as magnetic discs or other storage media, and loaded into the controller memory before the performance.
On a more detailed level, selection of a performance by depressing a switch 8, and selecting STANDARD PERFORM with selector 9, causes the processor to transmit, by way of the MIDI interface 5, to the sound generating device, a set of parameters which define the initial voice (patch) for this performance. The conditioning circuitry 14 includes a multiplexer which allows the processor 11 to poll the values of the sliders 7i...7j. If a change in index of any slider or sliders is detected, the new index position of the slider or sliders is used by the processor to recover the new voice parameter values for that index. For each slider index position aj of a slider j, the controller retrieves the parameter identifications pj and corresponding parameter values vj associated with the slider as follows. For each occurrence of a parameter pj associated with the jth slider, the processor looks up, in the table Tj compiled for that slider, the value of pj corresponding to the current slider position aj, denoted Pj(aj). This value, denoted vj = Pj(aj), and the code for its parameter pj are then inserted by the processor into SysEx messages for the particular sound generating device if pj is a voice parameter, and these new values are immediately transmitted to the sound generating device. Thus the voice of the sound generating device changes to match the new position of the slider or sliders. The voice of the sound generating device is therefore dynamically variable on an instant to instant basis at the whim of the performer. A flow diagram of this mode and the following modes of operation is shown in Figure 5. In the STD PERFORM mode, the operation of the performance controller 6, after transmitting the voice parameter data 59, is depicted by 60, 63. The processor, in an actual embodiment, determines the value of each slider. If a value change is detected, the processor looks up the values vj, sends the new value in a SysEx message for each pj, and moves to the next slider.
This series of operations continues until a new mode of operation is selected.
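The STANDARD PERFORM polling loop just described may be sketched as follows. The slider reader, table store, and MIDI send function are stand-ins for the hardware and are not taken from the disclosure; the sketch shows only the poll-compare-lookup-transmit sequence:

```python
# Sketch of the STANDARD PERFORM loop: poll each slider once; when a slider's
# index has changed, look up every parameter value linked to that slider and
# transmit one message per parameter.

def perform_step(read_slider, last_positions, luts, send_sysex):
    """One pass over all sliders; mutates last_positions in place."""
    for j, old in enumerate(last_positions):
        index = read_slider(j)                    # hardware stand-in
        if index != old:                          # change detected
            last_positions[j] = index
            for param, table in luts[j].items():
                send_sysex(param, table[index])   # one SysEx message per parameter
```

In operation this step would be repeated continuously until a new mode is selected.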
In the OFFSET or INTEGRATE mode of operation, the performer may wish to apply the stored performance data set to a base voice (patch) which is a variation of the base voice stored in the performance PROMs comprising a portion of the performance data store. In this case, the dump or initial values of voice parameters may be different for the stored and selected voices. In the OFFSET/INTEGRATE mode 50, the performance controller 6 first at 51 sends a standard MIDI message to request a data dump from the instrument 3, by way of MIDI line 5. This causes the instrument 3 (refer to FIG. 1) to send out, by way of MIDI line 11 and merge 10, a stream of messages consisting of a byte identifying each active parameter and a byte identifying its current value. At step 52, the controller polls the MIDI interface 11 to retrieve and store these parameters and their current values. It then undertakes a loop with steps 53, 57, in which it polls a slider and looks up the parameter values corresponding to the current slider position.
When a value vj has been retrieved for parameter pj corresponding to a slider position, certain arithmetical smoothing operations are performed to make the controller derived parameter message fit smoothly with the existing parameter value. Specifically, to prevent a discontinuous jump, the parameter value difference or offset Oj is computed at step 55a. This offset is then used at step 56 to compute a new value, the sum vj + Oj, as a function of slider position, which results in a smooth transition from MIDI control to slider control. The offset is added to the LUT value, from the performance data store, each time a slider index change is detected. Thus, if the current LUT value for vj is (40) and the instrument data dump indicates a current value of (100), the controller adds an offset of (60) to each LUT entry for parameter pj. This may be done by saving the dump offset value and adding it each time a pj parameter value is sent. The loop 53, 57 is similar to the STD PERFORM mode with the exception of the smoothing operation.
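The OFFSET computation may be sketched as follows, using the figures from the text (a dumped value of 100 against a stored value of 40 yielding an offset of 60). The clamp to the 0-127 MIDI data range reflects the limiting described in the next paragraph; the function names are illustrative:

```python
# Sketch of OFFSET smoothing: the difference between the instrument's dumped
# value and the stored LUT value at the current position is saved once, then
# added to every subsequent lookup, clamped to the 7-bit MIDI range.

def offset_for(dump_value: int, lut_value: int) -> int:
    """Offset computed once from the data dump, e.g. 100 - 40 = 60."""
    return dump_value - lut_value

def offset_lookup(table, index: int, offset: int) -> int:
    """LUT lookup with the saved offset applied and clamped to 0-127."""
    return max(0, min(127, table[index] + offset))   # clamping can flatten the curve ends
```

The clamp is what produces the flattened curve portions that the INTEGRATE mode is designed to avoid.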
In the OFFSET mode of operation, it is possible to exceed the upper or lower limits of a parameter value. In practice, it is necessary to limit the offset to these values, thus flattened portions of the parameter curve can occur at either end of the slider range. For some metapatch™ programs this may be undesirable. In the INTEGRATE mode, a different parameter value computation is performed at step 55b to preserve relative changes in the range of parameter values while removing timbral discontinuities. In this mode of operation, the offset at the initial starting index (index 4 in Figure 6, for example) is retained, but other index values are derived to cause the parameter value to approach the end values of the stored data set asymptotically, effectively scaling the range in response to the initial value of the parameter. This form of calculation is depicted in Figure 6. This operation is performed at step 55b. For this mode, the table may be rewritten in its entirety at step 52 by performing the calculations of Figure 6, or the calculation may be performed dynamically, as the slider position is changed, as described in the previous modes. If the table is so rewritten, then the controller may revert to the STANDARD PERFORM mode at step 53 with the rewritten set of tables as its reference. In either case, the INTEGRATE mode generally preserves the relative direction of change and approximate magnitude of the parameter values, while the values never flatten out at the 0 or 127 extremity as would happen if the simple numerical offset were added to each value. The resultant tonal quality is thus improved.
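The equations of Figure 6 are not reproduced in this text, so the following is only one plausible reading of the INTEGRATE computation, with a linear taper standing in for the asymptotic curve: the full offset is retained at the starting index and diminishes toward zero at each end of the table, so the rewritten curve keeps its direction of change yet still meets the stored end values instead of flattening at 0 or 127:

```python
# Hypothetical INTEGRATE-style table rewrite (a simplification of Figure 6):
# apply the full offset at the starting index and taper it linearly to zero
# toward both ends of the table.

def integrate_table(table, start_index: int, offset: int):
    out = list(table)
    n = len(table) - 1
    for i, v in enumerate(table):
        if i == start_index:
            weight = 1.0                              # full offset retained here
        elif i < start_index:
            weight = i / start_index                  # taper toward the low end
        else:
            weight = (n - i) / (n - start_index)      # taper toward the high end
        out[i] = round(v + offset * weight)
    return out
```

With the example values of the text (offset 60, starting index 4 of a 9 point curve), the rewritten table meets the stored values at both ends while carrying the full offset at the starting index.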
Other methods of providing the data smoothing are contemplated. The method used may be selected by the performer, or the processor may use a set of decision values based on the initial voice values to make the choice.
A principal advantage of a controller according to the invention is its simplicity of use and universality of application.
Several further variations of the embodiment described above are contemplated by the invention. First, the essential operation of the device entails construction of tables wherein the value of each position signal of a slider 7j is associated with a specific value of each of one or more parameters (pj) by compiling a table Tj. Equivalently, the set of tables may be replaced by sets of equations, or stored curves interrelating the parameter values.
Further, MIDI messages termed 'continuous controller data' may be intercepted by the controller and used, in conjunction with or instead of the slider values, to derive the indices for changing the parameter values.
The foregoing description has been set forth by way of illustration, to enable a reader to understand and construct a performance controller operating in accordance with the invention. As such it is not exhaustive thereof, nor intended to limit the invention to the embodiments illustrated or discussed herein. The invention being thus disclosed, further modifications and variations will occur to those skilled in the art, and are included within the scope of the invention as set forth in the claims appended hereto.

Claims

1. A performance controller for connection to a music synthesizing system, said music synthesizing system generating music in response to performance information and voice information, said performance controller comprising:
A. a plurality of adjustable voice control means each having a range of voice control values, each voice adjustment means defining a voice control value capable of being adjusted by an operator during a performance;
B. a parameter data store for storing parameter data defining a plurality of voices for said music synthesizing system each associated with one of said plurality of voice adjustment control means, each voice being defined by at least one parameter, each parameter having a set of parameter values comprising a function of the range of voice control values; and
C. a control means for operatively determining the voice control value of each said voice adjustment control means and determining in response to change of the voice control value the parameter values of parameters related thereto and providing voice information to said music synthesizing system to thereby permit voice modification during a performance.
2. A performance controller as defined in claim 1 in which:
A. each said adjustable voice control means includes means for generating a control value;
B. said parameter data store includes a look-up table memory which stores, for each adjustable voice control means, a plurality of parameter values each associated with a control value; and
C. said control means determines when a control value generated by a said adjustable voice control means has changed, said control means in response accessing said parameter data store for retrieving a parameter value associated with the changed control value and for providing the parameter value to said music synthesizing system as said voice information.
3. A performance controller as defined in claim 1 in which:
A. each said adjustable voice control means includes means for generating a control value;
B. said parameter data store includes, for each adjustable voice control means, means for defining a mathematical relationship between the control value and a parameter value; and
C. said control means determines when a control value generated by a said adjustable voice control means has changed, said control means in response accessing said parameter data store for retrieving a mathematical relationship defining means associated with the changed control value, generating a new parameter value in response and providing the new parameter value to said music synthesizing system as said voice information.
4. A performance controller as defined in claim 1 further including means for defining a plurality of selectable operating modes, said control means further operating in response to a selected operating mode.
5. A performance controller as defined in claim 4 in which one operating mode is a standard perform operating mode, said control means, in response to the selection of said standard perform operating mode, operatively determining the voice control value of each said voice adjustment control means and determining in response to change of the voice control value the parameter values of parameters related thereto as said voice information.
6. A performance controller as defined in claim 4 in which one operating mode is a differential operating mode, said control means, in response to the selection of said differential operating mode, operatively determining a base voice control value and the voice control value of each said voice adjustment control means, and determining in response to change of the voice control value and the base voice control value the parameter values of parameters related thereto, and providing the parameter value of parameters related thereto as said voice information.
7. A performance controller as defined in claim 6 in which the base voice control value is provided by said music synthesizing system.
8. A performance controller as defined in claim 6 in which the control means, in response to a base voice control value, initially determines an offset value, said control means using the offset value in determining a parameter value in response to a change of the voice control value, said control means adding the offset value to determine the parameter value.
9. A performance controller as defined in claim 6 in which the control means, in response to a base voice control value, initially determines a scaling value, said control means using the scaling value in determining a parameter value in response to a change of the voice control value, said control means multiplying by the scaling value to determine the parameter value.
10. A performance controller for connection to a music generating system comprising a performance control means defining musical note information and music synthesizing means for generating musical sounds in response to said musical note information and voice information, said performance controller comprising:
A. a plurality of adjustable voice control means each having a range of voice control values, each voice adjustment means defining a voice control value capable of being adjusted by an operator during a performance;
B. a parameter data store for storing parameter data defining a plurality of voices for said music synthesizing system each associated with one of said plurality of voice adjustment control means, each voice being defined by at least one parameter, each parameter having a set of parameter values comprising a function of the range of voice control values;
C. a control means for operatively determining the voice control value of each said voice adjustment control means and determining in response to change of the voice control value the parameter values of parameters related thereto and providing voice information to said music synthesizing means to thereby permit voice modification during a performance; and
D. merge means for receiving musical note information from said performance control means and voice information from said control means and for providing both said musical note information and said voice information to said music synthesizing means.
11. A performance controller as defined in claim 10 in which said performance control means also includes at least one adjustable voice control means having a range of voice control values defining a voice control value capable of being adjusted by an operator during a performance and generates performance voice control information in response to variations thereof, said merge means receiving said performance voice control information and coupling it to said control means, said control means using said performance voice control value in determining parameter values and providing voice information to said music synthesizing means.
12. A music generating system comprising:
A. a music synthesizer for generating music in response to performance information and voice information;
B. a performance controller comprising: i. a plurality of adjustable voice control means each having a range of voice control values, each voice adjustment means defining a voice control value capable of being adjusted by an operator during a performance; ii. a parameter data store for storing parameter data defining a plurality of voices for said music synthesizing system each associated with one of said plurality of voice adjustment control means, each voice being defined by at least one parameter, each parameter having a set of parameter values comprising a function of the range of voice control values; and iii. a control means for operatively determining the voice control value of each said voice adjustment control means and determining in response to change of the voice control value the parameter values of parameters related thereto and providing voice information to said music synthesizing system to thereby permit voice modification during a performance.
13. A music generating system comprising: A. a performance control means defining musical note information;
B. music synthesizing means for generating musical sounds in response to said musical note information and voice information; and
C. a performance controller comprising: i. a plurality of adjustable voice control means each having a range of voice control values, each voice adjustment means defining a voice control value capable of being adjusted by an operator during a performance; ii. a parameter data store for storing parameter data defining a plurality of voices for said music synthesizing system each associated with one of said plurality of voice adjustment control means, each voice being defined by at least one parameter, each parameter having a set of parameter values comprising a function of the range of voice control values; iii. a control means for operatively determining the voice control value of each said voice adjustment control means and determining in response to change of the voice control value the parameter values of parameters related thereto and providing voice information to said music synthesizing means to thereby permit voice modification during a performance; and iv. merge means for receiving musical note information from said performance control means and voice information from said control means and for providing both said musical note information and said voice information to said music synthesizing means.
PCT/US1991/000653 1991-01-18 1991-01-18 Performance controller for music synthesizing system WO1992013334A1 (en)


Publications (1)

Publication Number Publication Date
WO1992013334A1 true WO1992013334A1 (en) 1992-08-06

Family

ID=22225319


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4862783A (en) * 1987-06-26 1989-09-05 Yamaha Corporation Tone control device for an electronic musical instrument
US4890527A (en) * 1986-02-28 1990-01-02 Yamaha Corporation Mixing type tone signal generation device employing two channels generating tones based upon different parameter
US4909118A (en) * 1988-11-25 1990-03-20 Stevenson John D Real time digital additive synthesizer



Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): CA JP

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IT LU NL SE

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: CA