WO2007046311A1 - Sound processor and sound system - Google Patents

Sound processor and sound system

Info

Publication number
WO2007046311A1
Authority
WO
WIPO (PCT)
Prior art keywords
sound
channel
wave data
effect
data
Prior art date
Application number
PCT/JP2006/320525
Other languages
English (en)
Japanese (ja)
Inventor
Shuhei Kato
Koichi Sano
Koichi Usami
Original Assignee
Ssd Company Limited
Priority date
Filing date
Publication date
Application filed by Ssd Company Limited filed Critical Ssd Company Limited
Priority to JP2007540953A priority Critical patent/JP5055470B2/ja
Publication of WO2007046311A1 publication Critical patent/WO2007046311A1/fr


Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10K SOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
    • G10K15/00 Acoustics not otherwise provided for
    • G10K15/08 Arrangements for producing a reverberation or echo sound
    • G10K15/12 Arrangements for producing a reverberation or echo sound using electronic time-delay networks
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2210/00 Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H2210/155 Musical effects
    • G10H2210/265 Acoustic effect simulation, i.e. volume, spatial, resonance or reverberation effects added to a musical sound, usually by appropriate filtering or delays
    • G10H2210/281 Reverberation or echo

Definitions

  • The present invention relates to a sound processor that applies a sound effect to wave data and outputs the result, and to related technology.
  • The musical tone signal generator disclosed in Patent Document 1 (Japanese Unexamined Patent Publication No. Sho 63-267999) adds together a plurality of audio data reproduced by a plurality of sound channels and applies a resonance effect to the summed audio data.
  • That is, the musical tone signal generator sums the plurality of audio data reproduced by the plurality of sound channels, applies the effect, and outputs the result to the D/A converter.
  • In that device, the resonance effect is applied using the audio data from all the sound channels as the original sound, so the sound channels that supply the original sound to which the resonance effect is applied cannot be set arbitrarily.
  • Likewise, the sound channel to which the effect-added audio data is assigned is fixed and cannot be set arbitrarily.
  • Accordingly, it is an object of the present invention to provide a sound processor, and related technology, in which the wave data (original sound) to which a sound effect is applied can be taken from any logical sound channel and the wave data to which the effect has been added can be assigned to any logical sound channel. Disclosure of the invention
  • A sound processor according to the present invention mixes wave data reproduced by a plurality of logical sound channels and outputs the result from one or more physical sound channels.
  • The sound processor comprises channel setting means for setting, on each logical sound channel, one of three attributes: an effect original-sound channel, an effect channel, and a non-effect channel; and sound effect applying means which, when one or more logical sound channels are set as effect original-sound channels, applies a prescribed sound effect to the wave data reproduced by those effect original-sound channels and assigns the result to the effect channel.
  • The channel setting means can set any of the three attributes on any logical sound channel.
  • Since the effect original-sound channel can be set to any logical sound channel, a sound effect can be applied to the wave data (original sound) reproduced on any logical sound channel among the wave data reproduced on the multiple logical sound channels.
  • Similarly, since the effect channel can be set to any of the multiple logical sound channels, the wave data to which the sound effect has been applied can be played back on any logical sound channel.
  • All logical sound channels can also be set as non-effect channels and assigned to the reproduction of wave data without sound effects.
  • Because any attribute can be set on any logical sound channel in this way, it is possible, for example, to set the logical sound channel assigned to music playback as the effect original-sound channel and apply an acoustic effect such as echo to the wave data played there, while setting the logical sound channel assigned to sound-effect playback as a non-effect channel so that no acoustic effect is applied to the wave data played there.
  • When the logical sound channels provide functions such as pitch conversion and amplitude modulation for the reproduced wave data, the same functions can also be applied to the wave data assigned to the effect channel.
  • In this sound processor, the sound effect applying means stores the wave data reproduced by the effect original-sound channel in a buffer, retrieves it after a predetermined time for playback, and assigns it to the effect channel.
  • the delay function can be easily constructed by extracting the wave data from the buffer after a predetermined time.
  • In this sound processor, when multiple logical sound channels are set as effect original-sound channels, the sound effect applying means includes addition means for adding together the wave data reproduced by those effect original-sound channels.
  • The sound effect applying means stores the wave data resulting from the addition in the buffer, retrieves it after a predetermined time for playback, and assigns it to the effect channel.
  • Because the summed wave data is stored in the buffer as a single channel, the capacity of the buffer can be kept small. A sketch of this arrangement is given below.
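The sketch below is an illustrative software model of this single-buffer idea, not the patent's hardware: samples from the channels flagged as effect original-sound channels are summed into one stream, and only that stream passes through a delay FIFO. The channel values, flags, and FIFO length are hypothetical.

```python
from collections import deque

def make_delay(n_entries):
    """Return a push() that delays each sample by n_entries (FIFO pre-filled with silence)."""
    fifo = deque([0] * n_entries)
    def push(sample):
        fifo.append(sample)
        return fifo.popleft()
    return push

# Hypothetical per-channel data for one time step: (wave sample, set as effect source?)
channels = [(30, True), (-12, False), (25, True), (7, False)]

delay = make_delay(n_entries=4)                            # "predetermined time" of 4 samples
summed = sum(w for w, is_source in channels if is_source)  # addition means: only source channels
delayed = delay(summed)                                    # later retrieved for the effect channel
print(summed, delayed)                                     # -> 55 0 (the 55 emerges 4 steps later)
```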
  • This sound processor further comprises amplitude modulation means for amplitude-modulating the wave data at an arbitrary rate independently for each logical sound channel and independently for each physical sound channel.
  • The sound effect applying means further includes averaging means which, for each logical sound channel set as an effect original-sound channel, calculates the average of the amplitude modulation rates of the multiple physical sound channels and multiplies the wave data by that average, and the addition means adds together the wave data resulting from these multiplications.
  • Even when the mixing balance of the logical sound channels differs from one physical sound channel to another, only the single averaged and summed signal needs to be buffered, so the buffer capacity can be reduced.
  • In this sound processor, the sound effect applying means includes extraction means for extracting a predetermined number of bits from the accumulated wave data that results from the addition by the addition means, and saturation processing means which, when the wave data resulting from the addition exceeds the range that can be expressed by the predetermined number of bits, replaces the bits extracted by the extraction means with a predetermined value.
  • When the accumulated wave data does not exceed the range that can be expressed by the predetermined number of bits, the sound effect applying means stores the wave data represented by the bits extracted by the extraction means in the buffer as they are; when it does exceed that range, the sound effect applying means stores the wave data represented by the bits replaced by the saturation processing means in the buffer.
  • The wave data stored in the buffer is retrieved after a predetermined time for playback and assigned to the effect channel.
  • In this sound processor, the extraction means extracts a contiguous group of the predetermined number of bits from the accumulated wave data, and the saturation processing means replaces the extracted bits with the predetermined value when any of the bits of the accumulated wave data that are higher than, and not included in, the extracted bits differs from the most significant bit (sign bit) of the extracted bits.
  • the reverb function can be easily realized.
  • This sound processor further includes digital/analog conversion means for converting a digital signal into an analog signal; the wave data processed by the logical sound channels is PCM data, and the digital/analog conversion means converts the PCM data into an analog signal.
  • Because the wave data is PCM data, the buffer required for the processing that adds the sound effect (for example, the delay processing) can easily be constructed with means such as RAM.
  • In addition, the processing for imparting the acoustic effect (for example, amplitude modulation processing) can be performed with a small logic circuit or with compact software.
  • This sound processor further comprises time division multiplex output means for time-division multiplexing the wave data reproduced by the plurality of logical sound channels and outputting it to the digital/analog conversion means.
  • The time division multiplex output means outputs the wave data reproduced by the effect channel not only in the time slot allotted to the effect channel but also, in place of the wave data of other logical sound channels, in the subsequent time slots allotted to those channels.
  • Because the wave data reproduced by the effect channel can be assigned to the output time slots of logical sound channels other than the effect channel, a simple circuit can give the effect-added wave data a volume exceeding the upper limit that could normally be set.
  • A sound system according to the present invention mixes the wave data reproduced by a plurality of logical sound channels and outputs the result from a plurality of physical sound channels.
  • The sound system comprises a sound processor, an analog/digital conversion unit that converts an externally input analog audio signal into a digital audio signal, and an arithmetic processing unit that performs arithmetic processing according to a program.
  • The sound processor includes channel setting means for setting, on each logical sound channel, one of four attributes: an effect original-sound channel whose wave data serves as the source of the sound effects, a first effect channel to which wave data given a first predetermined sound effect is assigned, a second effect channel to which wave data given a second predetermined sound effect is assigned, and a non-effect channel to which wave data given no sound effect is assigned.
  • The sound processor also includes first sound effect applying means which, when one or more logical sound channels are set as effect original-sound channels, stores the wave data reproduced by those channels in a first buffer, retrieves it after a first predetermined time for playback, and assigns it to the first effect channel; the channel setting means can set any of the four attributes on any logical sound channel.
  • The arithmetic processing unit stores the digital audio signal obtained by the analog/digital conversion unit in a second buffer, and the sound processor further comprises second sound effect applying means which retrieves the digital audio signal stored in the second buffer after a second predetermined time for playback and assigns it to the second effect channel.
  • Since the effect original-sound channel can be set to any logical sound channel, a sound effect (for example, echo or reverb) can be applied to the wave data reproduced on any logical sound channel among the wave data reproduced on the multiple logical sound channels.
  • Since the first and second effect channels can also be set to any of the multiple logical sound channels, the wave data to which the sound effects have been applied can be played back on any logical sound channel.
  • all logical sound channels can be set as non-effect channels and assigned for the reproduction of wave data without sound effects.
  • analog audio signals input from outside can be played back with a sound effect after being converted to digital audio signals.
  • Because part of the processing for imparting the acoustic effect is performed by the arithmetic processing unit (that is, in software), the configuration for imparting the effect can be constructed flexibly.
  • In this sound system, the first sound effect applying means includes first amplitude modulation means for amplitude-modulating, at a first predetermined rate, the wave data that is retrieved from the first buffer after the first predetermined time for playback, and first addition means for adding the amplitude-modulated wave data to the wave data stored at a first predetermined position displaced from the read position of the wave data in the first buffer.
  • Similarly, the arithmetic processing unit amplitude-modulates, at a second predetermined rate, the wave data that is retrieved from the second buffer after the second predetermined time for playback, and adds the result to the wave data stored at a second predetermined position displaced from the read position of the wave data in the second buffer.
  • the reverb function can be easily realized.
  • The sound processor further includes digital/analog conversion means for converting a digital signal into an analog signal; the wave data processed by the logical sound channels is PCM data, and the digital/analog conversion means converts the PCM data into an analog signal.
  • The sound processor further includes time division multiplex output means for time-division multiplexing the wave data reproduced by the multiple logical sound channels and outputting it to the digital/analog conversion means. The time division multiplex output means outputs the wave data reproduced by the first effect channel in the time slot allotted to the first effect channel and also, in place of the wave data of other logical sound channels, in the subsequent time slots allotted to those channels; likewise, it outputs the wave data reproduced by the second effect channel in the time slot allotted to the second effect channel and also, in place of the wave data of other logical sound channels, in the subsequent time slots allotted to those channels.
  • Because the wave data reproduced by the first effect channel and the wave data reproduced by the second effect channel can be assigned to the output time slots of logical sound channels other than the first and second effect channels, a simple circuit can give the effect-added wave data a volume exceeding the upper limit that could normally be set.
  • FIG. 1 is a block diagram showing an internal configuration of the multimedia processor 1 in the embodiment of the present invention.
  • Fig. 2 is an explanatory diagram of a general echo model.
  • FIG. 3 is an explanatory diagram of an echo model in the embodiment of the present invention.
  • FIG. 4 is a conceptual diagram of the echo function in the embodiment of the present invention.
  • FIG. 5 is a conceptual diagram of the microphone echo function in the embodiment of the present invention.
  • FIG. 6 is a conceptual diagram of a logical sound channel and a physical sound channel according to the embodiment of the present invention.
  • FIG. 7 is a block diagram showing the internal configuration of the SPU 13 of FIG. 1.
  • FIG. 8 is an explanatory diagram of time division multiplexing by the data processing block 35 of FIG. 7.
  • FIG. 9 is an explanatory diagram of the echo FIFO buffer and the microphone echo FIFO buffer configured in the main RAM 25 of FIG. 1.
  • FIG. 10 is a block diagram showing the internal configuration of the echo function block 55 of FIG. 7.
  • FIG. 11 is a block diagram showing the internal configuration of the microphone echo function block 57 of FIG. 7.
  • FIG. 12 is an exemplary view showing the output of wave data as echo components and wave data as microphone echo components in the embodiment of the present invention. Best mode for carrying out the invention
  • FIG. 1 is a block diagram showing the internal configuration of the multimedia processor 1 according to the embodiment of the present invention.
  • This multimedia processor 1 includes an external memory interface 3, a DMAC (direct memory access controller) 4, a central processing unit (hereinafter referred to as "CPU") 5, a CPU local RAM 7, a rendering processing unit (hereinafter referred to as "RPU") 9, a color palette RAM 11, a sound processing unit (hereinafter referred to as "SPU") 13, an SPU local RAM 15, a geometry engine (hereinafter referred to as "GE") 17, a Y sorting unit (hereinafter referred to as "YSU") 19, an external interface block 21, a main RAM access arbiter 23, a main RAM 25, an I/O bus 27, a video DAC (digital to analog converter) 29, an audio DAC block 31, and an A/D converter (hereinafter referred to as "ADC") 33.
  • the CPU 5 executes programs stored in the memory MEM to perform various calculations and control the entire system.
  • The CPU 5 can also issue program and data transfer requests to the DMAC 4, and it can fetch program code directly from the external memory 50 and access data in the external memory 50 directly without going through the DMAC 4.
  • The I/O bus 27 is a system control bus with the CPU 5 as bus master, and is used to access the control registers and the local RAMs 7, 11, and 15 of the functional units that act as bus slaves (the external memory interface 3, DMAC 4, RPU 9, SPU 13, GE 17, YSU 19, external interface block 21, and ADC 33). In this way, these functional units are controlled by the CPU 5 through the I/O bus 27.
  • The CPU local RAM 7 is a RAM dedicated to the CPU 5 and is used as a stack area for saving data on subroutine calls and interrupts, as a storage area for variables handled only by the CPU 5, and the like.
  • The RPU 9 generates 3D images composed of polygons and sprites in real time. Specifically, the RPU 9 reads each structure instance of the polygon structure array and of the sprite structure array sorted by the YSU 19 from the main RAM 25, performs predetermined processing, and generates an image for each horizontal line in accordance with the scanning of the screen (display screen). The generated image is converted into a data stream representing the composite video signal waveform and output to the video DAC 29.
  • the RPU 9 also has a function of making a DMA transfer request to the DMAC 4 for capturing polygon and sprite texture pattern data.
  • Texture pattern data is two-dimensional pixel array data that is pasted onto a polygon or sprite. Each pixel of this data forms part of the information for specifying an entry in the color palette RAM 11.
  • The pixels of the texture pattern data are referred to as "texels" to distinguish them from the "pixels" that make up the image displayed on the screen. The texture pattern data is therefore a set of texels.
  • The polygon structure array is a structure array for polygons, which are polygonal graphic elements, and the sprite structure array is a structure array for sprites, which are rectangular graphic elements parallel to the screen.
  • An element of the polygon structure array is called a "polygon structure instance", and an element of the sprite structure array is called a "sprite structure instance". When there is no need to distinguish between the two, each is simply called a "structure instance".
  • Each polygon structure instance in the polygon structure array is display information for one polygon and contains its vertex coordinates on the screen, information on the texture pattern used in texture mapping mode, and color data (RGB color components).
  • Each sprite structure instance in the sprite structure array is display information for one sprite.
  • The video DAC 29 is a digital/analog converter for generating an analog video signal.
  • The video DAC 29 converts the data stream input from the RPU 9 into an analog composite video signal and outputs it from a video signal output terminal (not shown) to a television monitor or the like (not shown).
  • The color palette RAM 11 holds a color palette of 512 colors, that is, 512 entries.
  • The RPU 9 uses the texel data contained in the texture pattern data as part of the index that specifies an entry in the color palette, and refers to the color palette RAM 11 to convert the texture pattern data into color data (RGB color components).
  • The SPU 13, which is one of the aspects of the present invention, generates PCM (pulse code modulation) waveform data (hereinafter referred to as "wave data"), amplitude data, and main volume data. Specifically, the SPU 13 time-division multiplexes the wave data for up to 64 channels, generates envelope data for up to 64 channels, multiplies it by the channel volume data, and time-division multiplexes the resulting amplitude data. The SPU 13 then outputs the main volume data, the time-division multiplexed wave data, and the time-division multiplexed amplitude data to the audio DAC block 31. The SPU 13 also receives wave data and envelope data from the DMAC 4. Details of the SPU 13 will be described later.
  • The audio DAC block 31, which is another aspect of the present invention, converts the wave data, amplitude data, and main volume data input from the SPU 13 into analog signals, multiplies them in the analog domain, and generates an analog audio signal.
  • This analog audio signal is output from an audio signal output terminal (not shown) to an audio input terminal (not shown) of a television monitor or the like.
  • The SPU local RAM 15 stores parameters used when the SPU 13 performs wave playback and envelope generation (for example, address and pitch information for the wave data and the envelope data).
  • The GE 17 performs geometric operations for displaying 3D images.
  • Specifically, the GE 17 performs calculations such as matrix products, vector affine transformations, vector orthogonal transformations, perspective projection transformations, vertex brightness / polygon brightness calculations (vector inner products), and polygon back-face culling (vector outer products).
  • The YSU 19 sorts each structure instance of the polygon structure array and each structure instance of the sprite structure array stored in the main RAM 25 according to sort rules 1 to 4.
  • The polygon structure array and the sprite structure array are sorted separately.
  • a two-dimensional coordinate system used for actual display on a display device such as a television monitor is called a screen coordinate system.
  • The screen coordinate system is a two-dimensional pixel array of 2048 pixels in the horizontal direction and 1024 pixels in the vertical direction.
  • the coordinate origin is at the upper left, the right direction corresponds to the positive X axis, and the lower direction corresponds to the positive Y axis.
  • the actual displayed area is a part of the screen coordinate system, not the entire space. This display area is called a screen.
  • The Y coordinates in sort rules 1 to 4 are values in the screen coordinate system.
  • Sort rule 1 arranges the polygon structure instances in ascending order of minimum Y coordinate.
  • The minimum Y coordinate is the smallest of the Y coordinates of all the vertices of the polygon.
  • Sort rule 2 arranges the structure instances of polygons having the same minimum Y coordinate in descending order of depth value, so that deeper polygons are drawn first.
  • The YSU 19 treats polygons that have pixels displayed on the first line of the screen as having the same minimum Y coordinate even when their minimum Y coordinates differ, and sorts their structure instances according to sort rule 2. In other words, when multiple polygons have pixels displayed on the first line of the screen, their minimum Y coordinates are regarded as equal and their depth values are arranged in descending order; this is sort rule 3. Sort rules 1 to 3 also apply to interlaced scanning. However, when sorting for display of the odd field, the minimum Y coordinate of a polygon displayed on an odd line and the minimum Y coordinate of a polygon displayed on the even line immediately preceding that odd line are treated as the same,
  • and when sorting for display of the even field, the minimum Y coordinate of a polygon displayed on an even line and the minimum Y coordinate of a polygon displayed on the odd line immediately preceding that even line are treated as the same. This is sort rule 4.
  • Sort rules 1 to 4 for sprites are the same as sort rules 1 to 4 for polygons, respectively.
  • the external memory interface 3 is responsible for reading data from the external memory 50 and writing data to the external memory 50 via the external bus 51.
  • The external memory interface 3 arbitrates, according to an EBI priority table (not shown), the external bus access request factors from the CPU 5 and the DMAC 4 (factors requesting access to the external bus 51), selects one of them, and grants the selected external bus access request factor access to the external bus 51.
  • The EBI priority table defines the priorities among the multiple external bus access request factors from the CPU 5 and the request factor from the DMAC 4.
  • The DMAC 4 performs DMA transfers between the main RAM 25 and the external memory 50 connected to the external bus 51. In doing so, the DMAC 4 arbitrates, according to a DMA priority table (not shown), the DMA request factors from the CPU 5, RPU 9, and SPU 13 (factors requesting DMA), selects one DMA request factor, and then issues a DMA request to the external memory interface 3.
  • The DMA priority table defines the priorities among the DMA request factors of the CPU 5, RPU 9, and SPU 13.
  • The DMA request factors of the SPU 13 are (1) transferring wave data into the wave buffer and (2) transferring envelope data into the envelope buffer.
  • The wave buffer and the envelope buffer are temporary storage areas for wave data and envelope data, respectively, set up on the main RAM 25. Arbitration between the two DMA request factors of the SPU 13 is performed by hardware (not shown) in the SPU 13, and the DMAC 4 is not aware of it.
  • the texture buffer is a temporary storage area for texture pattern data set on the main RAM 25.
  • The DMA request factors of the CPU 5 are (1) page transfers when a page miss occurs in memory management and (2) data transfers requested by the application program, among others. When multiple DMA requests occur simultaneously in the CPU 5, arbitration is performed by software executed by the CPU 5, and the DMAC 4 is not aware of it.
  • the external interface block 21 is an interface with the peripheral device 54 and includes 24 channels of programmable digital input / output (I / O) ports.
  • Each of the 24 I/O ports is internally connected to one or more of the following: a 4-channel mouse interface function, a 4-channel light gun interface function, a 2-channel general-purpose timer/counter, a 1-channel asynchronous serial interface function, and a 1-channel general-purpose parallel/serial conversion port function.
  • The ADC 33 is connected to 4-channel analog input ports, and converts the analog signals input from the analog input device 52 through these ports into digital signals. For example, analog input signals such as microphone audio are sampled and converted into digital data.
  • The main RAM access arbiter 23 arbitrates access requests to the main RAM 25 from the functional units (the CPU 5, RPU 9, GE 17, YSU 19, DMAC 4, and the external interface block 21 (general-purpose parallel/serial conversion port)) and grants access permission to one of them.
  • The main RAM 25 is used as a work area, variable area, and memory management area for the CPU 5, and the like.
  • The main RAM 25 is also used as an area for data that the CPU 5 exchanges with other functional units, as a buffer area for data that the RPU 9 and SPU 13 acquire from the external memory 50 by DMA, and as input and output data areas for the GE 17 and YSU 19.
  • The external bus 51 is a bus for accessing the external memory 50, and is accessed from the CPU 5 and the DMAC 4 through the external memory interface 3.
  • The data bus of the external bus 51 is 16 bits wide, and external memories 50 with a data bus width of 8 bits or 16 bits can be connected. External memories with different data bus widths can be connected at the same time, and the data bus width is switched automatically according to the external memory being accessed.
  • An echo is a reflected sound, produced when a sound reflects off a wall or ceiling, that can be clearly distinguished from the original sound, while a reverb is a reverberation in which the individual reflected sounds cannot be distinguished from one another.
  • FIG. 2 is an illustration of a typical echo model; it shows a general model of the echo and reverb that occur in a music hall or the like.
  • The delay time of the initial reflected sound is determined by the distance between the sound source and the walls.
  • The initial reflected sound is generally called an echo and is heard with a delay relative to the original sound.
  • The loudness and timbre of the initial reflected sound vary depending on the size and material of the reflecting wall.
  • The remaining reverberation is generally called a reverb; it arises as the original sound and the reflected sounds are reflected repeatedly by the ceiling, floor, seats, audience, and other objects. Normally, the original sound cannot be identified just by listening to the reverberation.
  • The volume, timbre, and duration of the reverberant sound depend on various factors such as the volume, shape, and materials of the hall.
  • In this embodiment, echo and reverberation are deliberately reproduced with a simple model.
  • A single sound model is provided that covers both echo and reverb, and the function that realizes this model is called the "echo function".
  • FIG. 3 is an explanatory diagram of an echo model according to the embodiment of the present invention.
  • The echo delay time is created by an echo FIFO buffer configured on the main RAM 25.
  • The sum of the wave data reproduced by the logical sound channels specified as the echo source (that is, the wave data to which the echo function applies the effect) is stored in this buffer, and the delay is produced by retrieving and playing back the summed wave data after a certain time.
  • A logical sound channel specified as the echo source is called an "effect original-sound channel",
  • the logical sound channel to which the wave data serving as the echo component (that is, the wave data to which the echo function has applied the effect) is assigned is called an "effect channel",
  • and a logical sound channel to which wave data serving as a normal sound component (that is, wave data to which the echo function applies no effect) is assigned is called a "non-effect channel".
  • A physical sound channel is a sound channel corresponding to an actual audio output, and is clearly distinguished from a logical sound channel, which is a sound channel that reproduces the wave data output to a physical sound channel.
  • In the following, "sound channel" by itself means a logical sound channel.
  • In this embodiment, two physical sound channels are provided, left and right. The physical sound channel that reproduces the left audio signal (left wave data) is called the "left channel", and the physical sound channel that reproduces the right audio signal (right wave data) is called the "right channel".
  • The reverb delay time is also created by the echo FIFO buffer configured on the main RAM 25.
  • The delayed wave data is fed back and reproduced repeatedly while being attenuated, thereby reproducing the reverberation.
  • Specifically, the reverberation is created by multiplying the wave data reproduced by the effect channel by the echo release rate ER and adding the product to the wave data stored in the echo FIFO buffer.
  • Echo and reverb for the audio data input from the ADC 33 are also supported by hardware.
  • This function is called a microphone echo function.
  • The microphone echo function uses a microphone echo FIFO buffer configured on the main RAM 25 to create the delay.
  • The computation required for the reverb is performed by the CPU 5.
  • When the microphone echo function is used, the number of usable sound channels is limited to 48 or fewer, and one of those 48 sound channels must be set as the effect channel that reproduces the microphone echo component.
  • The echo function and the microphone echo function can be used at the same time; in that case, an effect channel is set for each of them.
  • FIG. 4 is a conceptual diagram of the echo function in the embodiment of the present invention. Referring to FIG. 4, the echo function includes echo blocks EB0 to EB(N-1), an adder Ap, a funnel shifter / saturation processing circuit FS, echo FIFO buffers Ba and Bb, an adder A1, and a multiplier M1.
  • Each of the echo blocks EB0 to EB (N-1) includes a multiplier Mp and a switch SW.
  • The echo blocks EB0 to EB(N-1) also include the amplitude left/right average value calculation units AM0 to AM(N-1) of the corresponding sound channels #0 to #(N-1).
  • The wave data W0 to W(N-1), the left amplitude data AML0 to AML(N-1), and the right amplitude data AMR0 to AMR(N-1) of sound channels #0 to #(N-1) are input to the corresponding echo blocks EB0 to EB(N-1).
  • In each echo block EB0 to EB(N-1), the amplitude left/right average value calculation unit AM0 to AM(N-1) calculates the average of the left amplitude data AML0 to AML(N-1) and the right amplitude data AMR0 to AMR(N-1) as the amplitude left/right average value ALR0 to ALR(N-1). "N" is variable and can range from 1 to 48.
  • When the sound channels #0 to #(N-1), their wave data W0 to W(N-1), the echo blocks EB0 to EB(N-1), and the amplitude left/right average values ALR0 to ALR(N-1) are referred to generically, they are written as sound channel #n, wave data Wn, echo block EBn, and amplitude left/right average value ALRn, respectively.
  • Any of the sound channels #0 to #(N-1) can be set as an effect original-sound channel.
  • In the following, it is assumed that sound channel #n is set as an effect original-sound channel.
  • The wave data serving as the echo source is the product EMn of the wave data Wn of the sound channel #n set as the effect original-sound channel and the amplitude left/right average value ALRn. That is, the multiplier Mp of the echo block EBn multiplies the wave data Wn of the sound channel #n set as the effect original-sound channel by the amplitude left/right average value ALRn to obtain the product EMn.
  • The switch SW of an echo block EBn is turned on only when its sound channel #n is set as an effect original-sound channel, so that, of the products EMn calculated for the individual sound channels #n, only those of the sound channels set as effect original-sound channels are passed on to the addition.
  • The adder Ap adds these products together to obtain the total sum Σ, which can be expressed as the sum of Wn × ALRn over the sound channels #n set as effect original-sound channels. A sketch of this summation is given after this item.
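The following is a minimal sketch, with hypothetical sample values, of the echo-source summation just described: each channel's wave data Wn is multiplied by its amplitude left/right average ALRn, and only the channels whose switch SW is on (those set as effect original-sound channels) contribute to the total.

```python
def echo_source_sum(channels):
    """channels: dicts with wave data W, amplitude data AML/AMR, and an effect-source flag."""
    total = 0
    for ch in channels:
        if not ch["is_effect_source"]:        # switch SW off: channel does not contribute
            continue
        alr = (ch["AML"] + ch["AMR"]) // 2    # amplitude left/right average value ALRn
        total += ch["W"] * alr                # multiplier Mp; adder Ap accumulates
    return total                              # total sum fed to the funnel shifter / saturation stage

channels = [
    {"W": 40,  "AML": 100, "AMR": 80,  "is_effect_source": True},
    {"W": -15, "AML": 60,  "AMR": 60,  "is_effect_source": False},
    {"W": 22,  "AML": 90,  "AMR": 110, "is_effect_source": True},
]
print(echo_source_sum(channels))              # -> 40*90 + 22*100 = 5800
```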
  • The funnel shifter / saturation processing circuit FS outputs 8-bit wave data to the subsequent stage.
  • When the total sum Σ exceeds the range that can be represented by 8 bits, the 8-bit wave data is saturated to a predetermined value by the funnel shifter / saturation processing circuit FS according to the value of its sign bit, which is the most significant of the 8 bits.
  • The 8-bit wave data from the funnel shifter / saturation processing circuit FS is written sequentially to the echo FIFO buffer Ba provided in the main RAM 25.
  • An input value is output from the effect channel as wave data with the sound effects (echo and reverb) after it has passed through all the entries of the echo FIFO buffers Ba and Bb.
  • The echo delay time Techo (see FIG. 3) can be calculated from the number of entries Na in the echo FIFO buffer Ba, the number of entries Nb in the echo FIFO buffer Bb, and the frequency fecho (Hz) at which wave data is written into the echo FIFO buffers Ba and Bb, as Techo (seconds) = (Na + Nb) / fecho.
  • The value output from the echo FIFO buffer Bb is output from the effect channel as the wave data of the echo component, and is also fed to the multiplier M1, where it is multiplied by the echo release rate ER.
  • The product is added by the adder A1 to the wave data output from the echo FIFO buffer Ba, and the sum is written to the echo FIFO buffer Bb.
  • The reverb delay time Treverb (see FIG. 3) can be calculated from the number of entries Nb in the echo FIFO buffer Bb and the frequency fecho (Hz) at which wave data is written into the echo FIFO buffers Ba and Bb, as Treverb (seconds) = Nb / fecho. A software sketch of this two-buffer arrangement follows.
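Below is a simplified software model of the Ba/Bb arrangement and the echo release rate ER feedback described above. It is a sketch under assumed buffer sizes and input rate, not the hardware's fixed-point implementation, and it also evaluates the Techo and Treverb formulas for those assumed values.

```python
from collections import deque

class EchoModel:
    def __init__(self, na, nb, er):
        self.ba = deque([0] * na)   # echo FIFO buffer Ba (Na entries)
        self.bb = deque([0] * nb)   # echo FIFO buffer Bb (Nb entries)
        self.er = er                # echo release rate ER, 0.0 .. 1.0

    def step(self, saturated_sum):
        """saturated_sum: 8-bit wave data from the funnel shifter / saturation stage."""
        self.ba.append(saturated_sum)
        from_ba = self.ba.popleft()
        echo_out = self.bb.popleft()                  # wave data of the echo component
        self.bb.append(from_ba + echo_out * self.er)  # adder A1 + multiplier M1 feedback
        return echo_out                               # assigned to the effect channel

na, nb, fecho = 256, 128, 16000.0                     # hypothetical sizes and input rate
techo = (na + nb) / fecho                             # echo delay time from the text above
treverb = nb / fecho                                  # reverb delay time from the text above
print(techo, treverb)

echo = EchoModel(na, nb, er=0.5)
out = [echo.step(100 if i == 0 else 0) for i in range(600)]  # impulse input
print([i for i, v in enumerate(out) if v != 0][:3])          # echoes at steps 384, 512
```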
  • the microphone echo function will be explained.
  • The analog audio signal from an external analog input device 52 such as a microphone can be converted into a digital sequence using the ADC 33. If this digital sequence is converted into PCM wave data in a format compatible with the multimedia processor 1, the converted PCM wave data can be played back as one of the sound channels. As with the echo function described above, echo and reverb effects can be produced for this PCM wave data (the microphone echo function).
  • FIG. 5 is a conceptual diagram of the microphone echo function in the embodiment of the present invention.
  • Referring to FIG. 5, the microphone echo function involves the ADC 33, the CPU 5, microphone echo FIFO buffers MBa and MBb, an adder As, and a multiplier Ms.
  • The ADC 33 converts the analog audio signal from the external analog input device 52, such as a microphone, into 10-bit digital data. Since this 10-bit digital data is an unsigned value, the CPU 5 converts it into an 8-bit signed value (in the range of -127 to +127), that is, into PCM wave data in a format compatible with the multimedia processor 1. A hedged sketch of this conversion is given below.
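The exact mapping used by the CPU 5 is not spelled out above, so the sketch below simply assumes that the ADC mid-scale value (512) maps to 0, that the 10-bit range is scaled down to 8 bits, and that the result is clipped to the stated -127..+127 range.

```python
def adc10_to_pcm8(sample_10bit):
    assert 0 <= sample_10bit <= 1023          # 10-bit unsigned ADC value
    signed = (sample_10bit - 512) >> 2        # centre on zero, scale 10 bits down to 8
    return max(-127, min(127, signed))        # keep within the -127..+127 range in the text

print([adc10_to_pcm8(v) for v in (0, 512, 1023)])   # -> [-127, 0, 127]
```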
  • The CPU 5 writes the 8-bit PCM wave data, sample by sample, to the microphone echo FIFO buffer MBa provided on the main RAM 25.
  • The input PCM wave data is output from the effect channel as wave data with the acoustic effects (echo and reverb) after it has passed through all the entries of the microphone echo FIFO buffers MBa and MBb. The echo delay time of the microphone echo function is therefore obtained in the same way as for the echo function: it can be calculated from the total number of entries in the microphone echo FIFO buffers MBa and MBb and the frequency at which wave data is written into them.
  • The value output from the microphone echo FIFO buffer MBb is output from the effect channel as the wave data of the microphone echo component, and is also fed to the multiplier Ms, where it is multiplied by the microphone echo release rate.
  • The product is added by the adder As to the wave data output from the microphone echo FIFO buffer MBa, and the sum is written to the microphone echo FIFO buffer MBb.
  • The microphone echo release rate corresponds to the echo release rate ER.
  • The reverb delay time of the microphone echo function can likewise be calculated from the number of entries in the microphone echo FIFO buffer MBb and the frequency at which wave data is written into the microphone echo FIFO buffers MBa and MBb.
  • FIG. 6 is a conceptual diagram of the logical sound channels and the physical sound channels according to the embodiment of the present invention. Note that this figure shows the configuration of the logical and physical sound channels as perceived by human hearing, and the configuration of each part does not necessarily match the circuit implementation of the embodiment. Referring to FIG. 6, digital blocks DIB0 to DIB(K-1) are provided for the logical sound channels #0 to #(K-1). "K" is variable and can take a value from 1 to 64.
  • The analog block ANB includes, for the left channel (physical sound channel), multipliers ML0 to ML(K-1), an adder ADL, and a multiplier MLO, and, for the right channel (physical sound channel), multipliers MR0 to MR(K-1), an adder ADR, and a multiplier MRO.
  • The multipliers ML0 to ML(K-1) receive the outputs WVL0 to WVL(K-1) and AML0 to AML(K-1) of the digital blocks DIB0 to DIB(K-1), respectively.
  • The multipliers MR0 to MR(K-1) receive the outputs WVR0 to WVR(K-1) and AMR0 to AMR(K-1) of the digital blocks DIB0 to DIB(K-1), respectively.
  • digital blocks DI B0 to DIB (K-1) correspond to the playback units of logical sound channels # 0 to # (K-1), respectively.
  • The analog block ANB corresponds to the converters from the logical sound channels to the physical sound channels (a converter to the left channel and a converter to the right channel).
  • The audio output signal AUL output from the analog block ANB corresponds to the left channel (physical sound channel), and the audio output signal AUR output from the analog block ANB corresponds to the right channel (physical sound channel).
  • When the logical sound channels #0 to #(K-1) are referred to generically, they are written as sound channel #k, and when the digital blocks DIB0 to DIB(K-1) are referred to generically, they are written as digital block DIBk.
  • Likewise, when the multipliers ML0 to ML(K-1) are referred to generically, they are written as multiplier MLk, and when the multipliers MR0 to MR(K-1) are referred to generically, they are written as multiplier MRk.
  • The digital block DIBk performs the playback processing of the corresponding logical sound channel #k.
  • Each digital block DIBk includes a switch SWz, a wave buffer WBFk, the effect FIFO buffer EBF, a switch SWf, an interpolation filter IF, a switch SWs, an envelope buffer EVBFk, a switch SWt, and multipliers MP, MPL, and MPR.
  • The echo FIFO buffer and the microphone echo FIFO buffer described above are referred to generically as the effect FIFO buffer EBF.
  • PCM wave data is an 8-bit data stream and is usually stored in the external memory 50 in advance.
  • The SPU 13 starts acquiring the PCM wave data through the DMAC 4.
  • the DMAC 4 transfers the PCM data from the external memory 50 to the wave buffer WBFk secured on the main RAM 25.
  • the switch SWz is on.
  • When the CPU 5 supplies the PCM wave data, for example when the CPU 5 decompresses compressed PCM wave data, the SPU 13 turns off the switch SWz to disable acquisition of PCM wave data by the DMAC 4; in this case the CPU 5 writes the PCM wave data to the wave buffer WBFk itself.
  • The SPU 13 sets the switch SWf to the wave buffer WBFk side, and the PCM wave data stored in the wave buffer WBFk is sent to the subsequent stage as the wave data Wk.
  • When the logical sound channel #k is set as the effect channel that reproduces the echo component, the PCM wave data constituting the echo component is written to the effect FIFO buffer EBF (the echo FIFO buffer). In this case the switch SWf is set to the effect FIFO buffer EBF side, and the PCM wave data of the echo component is read sequentially from this buffer EBF and sent to the subsequent stage as the wave data Wk.
  • Similarly, when the logical sound channel #k is set as the effect channel that reproduces the microphone echo component, the PCM wave data constituting the microphone echo component is written to the effect FIFO buffer EBF (the microphone echo FIFO buffer), and the PCM wave data of the microphone echo component is read from this buffer EBF and sent to the subsequent stage as the wave data Wk.
  • Pitch conversion is performed by changing the rate at which the PCM wave data is read from the wave buffer WBFk.
  • An interpolation filter IF is provided to interpolate the PCM wave data.
  • Whether interpolation is applied is switched for all logical sound channels #0 to #(K-1) at once.
  • When interpolation is enabled, the switch SWs is set to the interpolation filter IF side, and the interpolated PCM wave data is sent to the subsequent stage.
  • When interpolation is disabled, the switch SWs is set directly to the switch SWf side, and the PCM wave data is sent to the subsequent stage without interpolation.
  • The PCM wave data from the switch SWs is output to both the left channel and right channel (left and right physical sound channel) converters.
  • The PCM wave data output to the left channel converter is denoted as the left wave data WVLk, and the PCM wave data output to the right channel converter is denoted as the right wave data WVRk.
  • the method of generating each sample of the envelope is different between the sequential mode and the release mode.
  • In the sequential mode, the DMAC 4 transfers an envelope data string from the external memory 50 to the envelope buffer EVBFk reserved on the main RAM 25 in response to a request from the SPU 13, and the SPU 13 decodes the envelope data string into envelope samples EVSk.
  • In this case the switch SWt is set to the envelope sample EVSk side, and the envelope sample EVSk is sent to the subsequent stage as the envelope data EVDk.
  • In the release mode, the multiplier MP multiplies the previous envelope data EVDk by the envelope release rate EVRk obtained from the SPU local RAM 15 to generate the next envelope sample.
  • In this case the switch SWt is set to the multiplier MP side, and the envelope sample from the multiplier MP is sent to the subsequent stage as the envelope data EVDk.
  • The gains GLk and GRk of each channel #k are stored in the SPU local RAM 15.
  • The multiplier MPL multiplies the envelope data EVDk (16 bits) by the gain GLk (8 bits) to generate the left channel amplitude data AMLk (16 bits), and the multiplier MPR multiplies the envelope data EVDk (16 bits) by the gain GRk (8 bits) to generate the right channel amplitude data AMRk (16 bits). A sketch of this amplitude generation follows.
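A small sketch of the amplitude data generation follows. It assumes (this is not stated explicitly above) that the 16-bit-by-8-bit product is scaled back to 16 bits by dropping the low 8 bits, and that the analog multipliers then use the upper 8 bits of that 16-bit result.

```python
def amplitude_data(evd_16bit, gain_8bit):
    """Model of MPL / MPR: 16-bit envelope data times 8-bit gain, kept at 16 bits."""
    assert 0 <= evd_16bit <= 0xFFFF and 0 <= gain_8bit <= 0xFF
    return (evd_16bit * gain_8bit) >> 8       # AMLk / AMRk (assumed scaling)

evd, gl, gr = 0x8000, 0xFF, 0x40              # hypothetical envelope and per-channel gains
aml, amr = amplitude_data(evd, gl), amplitude_data(evd, gr)
print(hex(aml), hex(amr), hex(aml >> 8), hex(amr >> 8))   # full values and their upper 8 bits
```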
  • The multiplier MLk of the analog block ANB receives the left wave data WVLk of the corresponding digital block DIBk and the left amplitude data AMLk (upper 8 bits), and multiplies them.
  • The multiplier MRk receives the right wave data WVRk of the corresponding digital block DIBk and the right amplitude data AMRk (upper 8 bits), and multiplies them.
  • The left audio signal of the logical sound channel #k, obtained as the product of the wave data WVLk and the amplitude data AMLk, is added by the adder ADL to the left audio signals of all the other logical sound channels being played back.
  • The multiplier MLO then multiplies the sum by the 8-bit main volume data MV to generate the final left channel audio output signal AUL.
  • Similarly, the right audio signal of the logical sound channel #k, obtained as the product of the wave data WVRk and the amplitude data AMRk, is added by the adder ADR to the right audio signals of all the other logical sound channels being played back, and the multiplier MRO multiplies the sum by the 8-bit main volume data MV to generate the final right channel audio output signal AUR. A sketch of this mixing path is given below.
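The sketch below models this left/right mixing path with hypothetical integer sample values: each channel's wave data is multiplied by its amplitude data, the products are summed per physical channel, and the sums are scaled by the main volume MV (scaling back to an output bit width is omitted).

```python
def mix(channels, main_volume):
    """channels: list of (WVLk, AMLk, WVRk, AMRk) tuples for the playing channels."""
    left = sum(wvl * aml for wvl, aml, _, _ in channels)    # multipliers MLk + adder ADL
    right = sum(wvr * amr for _, _, wvr, amr in channels)   # multipliers MRk + adder ADR
    return left * main_volume, right * main_volume          # multipliers MLO / MRO

channels = [(30, 0x40, 30, 0x20), (-10, 0x7F, -10, 0x7F)]   # hypothetical values
aul, aur = mix(channels, main_volume=0x80)
print(aul, aur)
```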
  • FIG. 7 is a block diagram showing the internal configuration of the SPU 13 of FIG. 1. Referring to FIG. 7, the SPU 13 includes a data processing block 35, a command processing state machine 37, a control register set 39 (including an SPU command register 38), a DMA requester 41, a main RAM interface 43, a crosstalk reduction circuit 45, and an SPU local RAM interface 47.
  • the data processing block 35 includes an echo function block 55 and a microphone echo function block 57.
  • the command processing state machine 37 performs state control in command execution of the SPU 13 in accordance with a command issued by the CPU 5 through the I / O bus 27.
  • The types of commands that can be issued include START, STOP, UPDATE, and NOP.
  • The command START is an instruction to start playback of the specified logical sound channel #k.
  • The CPU 5 writes the parameters necessary for playback to the SPU local RAM 15 in advance.
  • The command STOP is an instruction to stop playback of the specified logical sound channel #k.
  • The command UPDATE is a command that updates some of the parameters used for playback while the logical sound channel #k is playing.
  • The CPU 5 writes the parameter to be updated in advance to the corresponding control register in the SPU 13 (included in the control register set 39 described below).
  • The command NOP is a command that performs no processing.
  • The control register set 39 includes, in addition to the SPU command register 38, a plurality of control registers (not shown) for controlling the SPU 13.
  • The control register set 39 includes an echo function setting register (not shown).
  • The echo function setting register holds, for each logical sound channel, a value that sets the channel as a non-effect channel ("00"), as an effect original-sound channel, as the effect channel for the echo component, or as the effect channel for the microphone echo component ("11").
  • A logical sound channel thus takes one of the four attributes (non-effect channel, effect original-sound channel, effect channel for the echo component, or effect channel for the microphone echo component) according to the value of the echo function setting register at that time, and the CPU 5 can set the attribute of each logical sound channel by accessing the echo function setting register. It is also possible not to set any effect channel for the echo component and/or for the microphone echo component. A software sketch of such an attribute assignment follows.
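The following is a software-level sketch of assigning the four attributes to logical sound channels. It uses a plain per-channel mapping rather than the packed layout of the actual echo function setting register; of the register codes, only "00" (non-effect) and "11" (microphone echo effect channel) are given above, so the other two codes are left as assumptions.

```python
from enum import Enum

class ChannelAttr(Enum):
    NON_EFFECT = 0            # "00" in the echo function setting register
    EFFECT_SOURCE = 1         # effect original-sound channel (code assumed)
    ECHO_EFFECT = 2           # effect channel for the echo component (code assumed)
    MIC_ECHO_EFFECT = 3       # "11": effect channel for the microphone echo component

attrs = {k: ChannelAttr.NON_EFFECT for k in range(48)}  # e.g. 48 usable channels
attrs[0] = ChannelAttr.EFFECT_SOURCE     # music channel used as the echo source
attrs[1] = ChannelAttr.ECHO_EFFECT       # plays the delayed echo component
attrs[2] = ChannelAttr.MIC_ECHO_EFFECT   # plays the microphone echo component
print(sum(a is ChannelAttr.EFFECT_SOURCE for a in attrs.values()))  # -> 1 source channel
```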
  • The DMA requester 41 issues a DMA request to the DMAC 4 in order to acquire the wave data and envelope data from the external memory 50 according to instructions from the command processing state machine 37, and outputs the data acquired by the DMA transfer to the data processing block 35.
  • The data processing block 35 performs wave data acquisition and playback, envelope data acquisition and decoding, amplitude data generation, echo processing, and microphone echo processing. The data processing block 35 time-division multiplexes the left-channel wave data WVLk of the multiple logical sound channels #k and outputs the result to the crosstalk reduction circuit 45 as the left channel wave data MWL, and likewise time-division multiplexes the right-channel wave data WVRk of the multiple logical sound channels #k and outputs the result to the crosstalk reduction circuit 45 as the right channel wave data MWR.
  • Similarly, the data processing block 35 time-division multiplexes the left-channel amplitude data AMLk of the multiple logical sound channels #k and outputs the result to the crosstalk reduction circuit 45 as the left channel amplitude data MAML, and time-division multiplexes the right-channel amplitude data AMRk of the multiple logical sound channels #k and outputs the result to the crosstalk reduction circuit 45 as the right channel amplitude data MAMR.
  • In this way, the wave data of the multiple logical sound channels #k, and likewise their amplitude data, are mixed for each physical sound channel. This makes use of the fact that audio output that has been time-division multiplexed and digital/analog converted is perceived by the ear as mixed.
  • FIG. 8 is an explanatory diagram of time division multiplexing by the data processing block 35 of FIG. 7.
  • Referring to FIG. 8, the total number of logical sound channels #0 to #(K-1) to be time-division multiplexed (that is, mixed) can be set from 1 to 64 in steps of one channel.
  • the value of the number of multiplexed channels K can be set to any value from 1 to 64.
  • Each logical sound channel #k is assigned to one of the slots to be multiplexed; when K-channel mixing is performed, a group of K slots is output repeatedly. A sketch of this slot interleaving follows.
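A small sketch of the K-slot time-division multiplexing follows, using hypothetical sample values: one sample from each logical sound channel is placed in its slot, and the frame of K slots repeats for each sample period.

```python
def multiplex(frames_per_channel):
    """frames_per_channel: list of K per-channel sample lists of equal length."""
    k = len(frames_per_channel)
    stream = []
    for t in range(len(frames_per_channel[0])):
        for ch in range(k):                       # slot order = logical channel order
            stream.append(frames_per_channel[ch][t])
        # a silent gap could be inserted between slots here, as the crosstalk
        # reduction circuit 45 does, but it is omitted for brevity
    return stream

print(multiplex([[1, 2], [10, 20], [100, 200]]))  # K = 3 -> [1, 10, 100, 2, 20, 200]
```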
  • the parameters in each storage area also include wave data address information, which sets the wave data to be assigned to the logical sound channel.
  • The main RAM interface 43 mediates access by the data processing block 35 to the main RAM 25.
  • The main RAM 25 is accessed in order to (1) write the wave data and envelope data acquired by DMA transfer to the main RAM 25, (2) read the wave data and envelope data from the main RAM 25 for playback of a logical sound channel #k, and (3) read and write, for the echo processing and the microphone echo processing, the wave data of the echo component and the wave data of the microphone echo component to and from the main RAM 25.
  • The SPU local RAM interface 47 mediates access by the data processing block 35 to the SPU local RAM 15. The SPU local RAM 15 is accessed to read and write parameters for wave data playback and parameters for generating envelope data and amplitude data. The SPU local RAM interface 47 also mediates when the CPU 5 accesses the SPU local RAM 15 through the I/O bus 27.
  • The audio DAC block 31 in FIG. 1 includes cascaded DACs (not shown). As mentioned above, the mixing of the multiple logical sound channels #k relies on time-division multiplexed data.
  • If the time-division multiplexed data were input directly to the audio DAC block 31, propagation between the cascaded DACs could cause crosstalk between the logical sound channels #k.
  • The crosstalk reduction circuit 45 in FIG. 7 prevents such crosstalk between the logical sound channels #k by inserting a silent period between the data of consecutive logical sound channels #k.
  • Specifically, the crosstalk reduction circuit 45 inserts a silent period between the logical sound channels #k of the left channel wave data MWL and outputs the result to the audio DAC block 31 as the left channel wave data MWLC, and inserts a silent period between the logical sound channels #k of the right channel wave data MWR and outputs the result to the audio DAC block 31 as the right channel wave data MWRC.
  • Likewise, the crosstalk reduction circuit 45 inserts a silent period between the logical sound channels #k of the left channel amplitude data MAML and outputs the result to the audio DAC block 31 as the left channel amplitude data MAMLC, and inserts a silent period between the logical sound channels #k of the right channel amplitude data MAMR and outputs the result to the audio DAC block 31 as the right channel amplitude data MAMRC.
  • The crosstalk reduction circuit 45 passes the main volume data MV input from the corresponding control register (not shown) of the control register set 39 through unchanged and outputs it to the audio DAC block 31 as the main volume data MVC.
  • FIG. 9 is an explanatory diagram of the echo FIFO buffer and the microphone echo FIFO buffer configured in the main RAM 25 of FIG. 1.
  • The area of the echo FIFO buffer can be set freely on the main RAM 25.
  • The start and end of the echo FIFO buffer area are set by the echo FIFO buffer start address register 115 and the echo FIFO buffer end address register 117 described below, respectively (see FIG. 10).
  • The values of the echo FIFO buffer start address register 115 and the echo FIFO buffer end address register 117 are set by the CPU 5 through the I/O bus 27.
  • However, the lower 3 bits of each register cannot be set: the lower 3 bits of the echo FIFO buffer start address register 115 are fixed to "0b000", and the lower 3 bits of the echo FIFO buffer end address register 117 are fixed to "0b111".
  • Because the size of the echo FIFO buffer can be determined freely, a necessary and sufficient area can be secured on the main RAM 25 according to the required echo delay time (see FIG. 2).
  • The area of the microphone echo FIFO buffer can also be set freely on the main RAM 25.
  • The start and end of the microphone echo FIFO buffer area are set by the microphone echo FIFO buffer start address register 141 and the microphone echo FIFO buffer end address register 143 described below (see FIG. 11).
  • The values of the microphone echo FIFO buffer start address register 141 and the microphone echo FIFO buffer end address register 143 are set by the CPU 5 through the I/O bus 27. However, the lower 3 bits of each register cannot be set, and the lower 3 bits of the microphone echo FIFO buffer start address register 141 are fixed to "0b000".
  • FIG. 10 is a block diagram showing the internal configuration of the echo function block 55 of FIG. 7. Referring to FIG. 10, the echo function block 55 is composed of a wave data temporary register 71, a left amplitude temporary register 73, a right amplitude temporary register 75, an adder 77, a divider 79, a multiplier 81, an adder 83, an accumulated value storage register 85, a funnel shifter 87, a saturation processing circuit 89, a multiplexer (MUX) 93, an echo write data buffer 97, an echo FIFO buffer read address counter 99, an echo read data buffer 101, an echo FIFO buffer release address counter 103, a release read data buffer 105, a release write data buffer 107, a multiplier 111, and an adder 113.
  • An echo sigma window register 91, an echo FIFO buffer start address register 115, an echo FIFO buffer end address register 117, and an echo release rate register 109 are included in the control register set 39 of FIG. 7.
  • The wave data temporary register 71, the left amplitude temporary register 73, and the right amplitude temporary register 75 temporarily hold, respectively, the wave data Wn, the left-channel amplitude data AMLn, and the right-channel amplitude data AMRn of the sound channel #n that are output to the audio DAC block 31 via the crosstalk reduction circuit 45 for playback.
  • The values AMLn and AMRn in the left amplitude temporary register 73 and the right amplitude temporary register 75 are added together by the adder 77, and the addition result is divided by 2 by the divider 79 to obtain the average value ALRn of the left- and right-channel amplitude data.
  • The multiplier 81 multiplies the wave data Wn by the average value ALRn of the left- and right-channel amplitude data, and the adder 83 adds the product to the value held in the accumulated value storage register 85, so that the contributions of the effect original sound channels are accumulated.
  • The stored value σ (22 bits) in the accumulated value storage register 85 is output to the funnel shifter 87 and the saturation processing circuit 89.
  • The funnel shifter 87 selects 8 consecutive bits within the 22-bit input value σ from the accumulated value storage register 85 according to the setting value of the echo sigma window register 91, and outputs them to the MUX 93.
  • The saturation processing circuit 89 likewise selects 8 consecutive bits within the 22-bit input value σ from the accumulated value storage register 85 according to the setting value of the echo sigma window register 91, and extracts all the bits positioned higher than the selected 8 bits. The 8 bits selected here have the same value as the 8 bits selected by the funnel shifter 87.
  • If the bits of the 22-bit input value σ positioned higher than the selected 8 bits (that is, the bits not included in the selected 8 bits) are not all equal to the sign bit of the selected 8-bit value (the most significant bit of the 8 bits) (condition 1), the saturation processing circuit 89 outputs “0x7F” to the MUX 93 when that sign bit is “0”, and “0x81” when that sign bit is “1”.
  • Further, if the selected 8-bit value is “0x80” and all the bits positioned higher than the selected 8 bits are “1” (condition 2), the saturation processing circuit 89 outputs “0x81” to the MUX 93. When condition 1 or condition 2 is satisfied, the saturation processing circuit 89 also outputs to the MUX 93 a selection signal that selects the input value from the saturation processing circuit 89. Accordingly, when condition 1 or condition 2 is satisfied, the MUX 93 selects the input value from the saturation processing circuit 89 and outputs it to the echo write data buffer 97; when neither condition is satisfied, the MUX 93 outputs the input value from the funnel shifter 87 to the echo write data buffer 97.
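  • The bit-window selection and saturation described above can be summarized in the following software sketch; treating the echo sigma window register value as a plain right-shift amount, sign-extending σ into a 32-bit integer, and relying on an arithmetic right shift are assumptions of this illustration.

```c
#include <stdint.h>

/* Sketch of the funnel shifter 87 / saturation processing circuit 89 / MUX 93
 * path.  sigma is the 22-bit accumulated value, assumed sign-extended into an
 * int32_t; window is assumed to be a right-shift amount in the range 0..14;
 * an arithmetic right shift for negative values is also assumed.            */
static uint8_t select_with_saturation(int32_t sigma, unsigned window)
{
    uint8_t sel   = (uint8_t)((sigma >> window) & 0xFF); /* funnel shifter 87  */
    int32_t upper = sigma >> (window + 8);               /* bits above the 8   */
    int     sign  = (sel >> 7) & 1;                      /* sign bit of window */

    /* Condition 1: the upper bits are not all copies of the window's sign bit. */
    if (!((sign == 0 && upper == 0) || (sign == 1 && upper == -1)))
        return sign ? 0x81 : 0x7F;                       /* saturate            */

    /* Condition 2: the window reads 0x80 and every upper bit is 1.             */
    if (sel == 0x80 && upper == -1)
        return 0x81;

    return sel;                      /* otherwise the MUX 93 passes the shifter */
}
```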
  • The data output from the MUX 93 is stored in the echo write data buffer 97 one byte at a time.
  • The stored data is written, in 8-byte units, to the address WAD of the main RAM 25 (see the echo FIFO buffer in FIG. 9) indicated by the echo FIFO buffer write address counter 95 (see below).
  • The initial value of the echo FIFO buffer write address counter 95 is set by the CPU 5 through the control register set 39. The value of the echo FIFO buffer write address counter 95 is incremented by 8 each time a write to the main RAM 25 is performed (because data is written in 8-byte units); however, if the incremented value would exceed the value set in the echo FIFO buffer end address register 117, the counter is not incremented but is instead initialized to the value set in the echo FIFO buffer start address register 115. The current value of the echo FIFO buffer write address counter 95 can be read by the CPU 5 through the control register set 39.
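  • The wraparound of the write address counter can be modeled as below; the structure and function names are illustrative only, and the 8-byte alignment of the two register values is taken from the text above.

```c
#include <stdint.h>

/* Illustrative model of the echo FIFO buffer write address counter 95.
 * start_addr and end_addr stand in for the values of the start address
 * register 115 and the end address register 117 (both 8-byte aligned). */
typedef struct {
    uint32_t wad;        /* current write address WAD            */
    uint32_t start_addr; /* echo FIFO buffer start address (115) */
    uint32_t end_addr;   /* echo FIFO buffer end address   (117) */
} write_counter_t;

/* Advance after one 8-byte write to the main RAM 25: add 8, or wrap to
 * the start address when the incremented value would pass the end.     */
static void advance_write_counter(write_counter_t *c)
{
    if (c->wad + 8 > c->end_addr)
        c->wad = c->start_addr;
    else
        c->wad += 8;
}
```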
  • The wave data Wi serving as the echo component (where i is any of 0 to (N−1)) is read, in 8-byte units, from the address RAD of the main RAM 25 indicated by the echo FIFO buffer read address counter 99 (see the echo FIFO buffer in FIG. 9) and stored in the echo read data buffer 101. The initial value of the echo FIFO buffer read address counter 99 is set by the CPU 5 through the control register set 39.
  • The value of the echo FIFO buffer read address counter 99 is incremented by 1 at each read. If the incremented value would exceed the value set in the echo FIFO buffer end address register 117, the counter is not incremented but is instead initialized to the value set in the echo FIFO buffer start address register 115. The current value of the echo FIFO buffer read address counter 99 can be read by the CPU 5 through the control register set 39.
  • The 1-byte data output as the wave data Wi is multiplied by the multiplier 111 by the echo release rate ER set in the echo release rate register 109.
  • The value ER of the echo release rate register 109 is set by the CPU 5 through the I/O bus 27.
  • The result of the multiplication is added by the adder 113 to the 1-byte data stored in the release read data buffer 105.
  • The 1-byte data stored in the release read data buffer 105 is the data read from the address LAD of the main RAM 25 (see the echo FIFO buffer in FIG. 9) indicated by the echo FIFO buffer release address counter 103. This data is read in 1-byte units.
  • The initial value of the echo FIFO buffer release address counter 103 is set by the CPU 5 through the control register set 39. The value of the echo FIFO buffer release address counter 103 is incremented by 1 at each access; if the incremented value would exceed the value set in the echo FIFO buffer end address register 117, the counter is not incremented but is instead initialized to the value set in the echo FIFO buffer start address register 115.
  • The result of the addition by the adder 113 is sent to the release write data buffer 107 and written back to the address LAD of the main RAM 25 indicated by the echo FIFO buffer release address counter 103 (see the echo FIFO buffer in FIG. 9).
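  • Taken together, the multiplier 111 / adder 113 path amounts to the following release step; indexing the buffer with byte offsets, treating ER as an unsigned fraction ER/256, and clamping the sum are assumptions of this sketch.

```c
#include <stdint.h>

/* Sketch of the release path: the byte at RAD (wave data Wi) is scaled by
 * the echo release rate ER, added to the byte at LAD, and written back to
 * LAD, so that each echo entry decays a little every time it is revisited. */
static void release_step(int8_t *echo_fifo, uint32_t rad, uint32_t lad,
                         uint8_t er)
{
    int32_t wi     = echo_fifo[rad];                /* echo component Wi  */
    int32_t scaled = (wi * (int32_t)er) / 256;      /* multiplier 111     */
    int32_t sum    = scaled + echo_fifo[lad];       /* adder 113          */

    if (sum >  127) sum =  127;                     /* clamp (assumption) */
    if (sum < -128) sum = -128;
    echo_fifo[lad] = (int8_t)sum;                   /* write back at LAD  */
}
```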
  • the current value of the echo FIFO buffer release address counter 103 can be read by the CPU 5 through the control register set 39.
  • The value obtained by subtracting the echo FIFO buffer read address RAD from the echo FIFO buffer write address WAD is the sum (Na + Nb) of the numbers of entries in the echo FIFO buffers Ba and Bb (see FIG. 4) (see (Equation 2)).
  • The value obtained by subtracting the echo FIFO buffer read address RAD from the echo FIFO buffer release address LAD is the number Nb of entries in the echo FIFO buffer Bb (see (Equation 3)).
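  • Assuming WAD, RAD, and LAD all point into one ring buffer of `size` bytes, the entry counts corresponding to (Equation 2) and (Equation 3) can be computed as follows; the modulo term, which handles pointers that have wrapped past the end address, is an illustrative addition since the equations themselves are not reproduced in this text.

```c
#include <stdint.h>

/* Na + Nb = WAD - RAD (mod size): total entries in the echo FIFO buffers
 * Ba and Bb, as in (Equation 2).                                          */
static uint32_t entries_na_plus_nb(uint32_t wad, uint32_t rad, uint32_t size)
{
    return (wad + size - rad) % size;
}

/* Nb = LAD - RAD (mod size): entries in the echo FIFO buffer Bb, as in
 * (Equation 3).                                                           */
static uint32_t entries_nb(uint32_t lad, uint32_t rad, uint32_t size)
{
    return (lad + size - rad) % size;
}
```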
  • The release address counter 103, the release read data buffer 105, the release write data buffer 107, the multiplier 111, the adder 113, the echo release rate register 109, the echo FIFO buffer start address register 115, and the echo FIFO buffer end address register 117 make it possible to generate the echo model shown in FIG. 3.
  • FIG. 11 is a block diagram showing the internal configuration of the microphone echo function block 57. Referring to FIG. 11, the microphone echo function block 57 includes a microphone echo FIFO buffer read address counter 145 and a microphone echo read data buffer 147.
  • The microphone echo FIFO buffer start address register 141 and the microphone echo FIFO buffer end address register 143 are included in the control register set 39.
  • When the microphone echo function is used, the original sound component of the microphone echo is normally written directly by the CPU 5 to the microphone echo FIFO buffer secured in the main RAM 25.
  • The wave data Wj serving as the microphone echo component (where j is any of 0 to (N−1)) is read from the address MRAD of the main RAM 25 indicated by the microphone echo FIFO buffer read address counter 145 and stored in the microphone echo read data buffer 147.
  • The initial value of the microphone echo FIFO buffer read address counter 145 is set by the CPU 5 through the control register set 39.
  • The value of the microphone echo FIFO buffer read address counter 145 is incremented by 1 at each read; if the incremented value would exceed the value set in the microphone echo FIFO buffer end address register 143, the counter is not incremented but is instead initialized to the value set in the microphone echo FIFO buffer start address register 141.
  • The current value of the microphone echo FIFO buffer read address counter 145 can be read by the CPU 5 through the control register set 39.
  • For the microphone echo function, the multiplication corresponding to that of the multiplier 111 and the addition corresponding to that of the adder 113 are not supported in hardware; these operations are performed by the CPU 5 (that is, by software). Accordingly, hardware counterparts of the echo FIFO buffer write address counter 95 and the echo write data buffer 97 are not provided for the microphone echo function.
  • The microphone echo FIFO buffer write address MWAD is generated by a write address counter, maintained in the main RAM 25 by software, that has the same function as the write address counter 95, and the microphone echo FIFO buffer release address MLAD is likewise generated in the main RAM 25.
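  • A rough software counterpart of what the CPU 5 might do for the microphone echo path is sketched below; the structure, the release-rate format, the clamping, and the byte-wise pointer advance are all assumptions, since the text only states that the multiply and add are done in software.

```c
#include <stdint.h>

/* Software model of one microphone-echo maintenance step by the CPU 5:
 * scale the sample at the read address, add it to the sample at the
 * release address, write the sum back, and advance the software-held
 * write/release addresses (MWAD / MLAD) kept in the main RAM 25.       */
typedef struct {
    int8_t  *buf;         /* microphone echo FIFO buffer in main RAM 25 */
    uint32_t size;        /* buffer size in bytes                       */
    uint32_t mwad, mlad;  /* software write / release addresses         */
} mic_echo_t;

static void mic_echo_step(mic_echo_t *m, uint32_t mrad, uint8_t release_rate)
{
    int32_t wj  = m->buf[mrad];                        /* mic echo component Wj */
    int32_t sum = (wj * (int32_t)release_rate) / 256   /* software multiply     */
                + m->buf[m->mlad];                     /* software add          */

    if (sum >  127) sum =  127;                        /* clamp (assumption)    */
    if (sum < -128) sum = -128;
    m->buf[m->mlad] = (int8_t)sum;

    m->mlad = (m->mlad + 1) % m->size;                 /* advance MLAD          */
    m->mwad = (m->mwad + 1) % m->size;                 /* advance MWAD          */
}
```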
  • A function is provided to output the echo component and/or the microphone echo component on multiple channels.
  • The maximum number of sound channels is “32”.
  • However, output is not performed beyond the last sound channel.
  • When the echo component is output on multiple channels, the sound channels following the effect channel are also allocated for the output of the echo component.
  • In that case, the original settings of the allocated sound channels are ignored. The same applies to the output of the microphone echo component.
  • In the echo hold time register (not shown) included in the control register set 39, the output of the echo component is disabled or enabled and the number of consecutive channels is set.
  • The data processing block 35 outputs the echo component on the consecutive channels according to the setting of the echo hold time register.
  • Likewise, in the microphone echo hold time register (not shown) included in the control register set 39, the output of the microphone echo component is disabled or enabled and the number of channels is set. The data processing block 35 outputs the microphone echo component according to the setting of the microphone echo hold time register.
  • FIG. 12 shows an example of the output of the wave data serving as the echo component and of the wave data serving as the microphone echo component.
  • In this example, the number of sound channels to be played is “8”, the echo component is assigned to sound channel #2, and the microphone echo component is assigned to sound channel #6. That is, the effect channel for the echo component is sound channel #2, and the effect channel for the microphone echo component is sound channel #6.
  • The number of channels for the echo component is set to “2”, and the number of channels for the microphone echo component is set to “3”.
  • Sound channel #6 and the subsequent sound channel #7 output the microphone echo component.
  • The original setting of sound channel #7, which follows the sound channel #6 assigned to the microphone echo component, is ignored.
  • In this case, outputting on “3” channels in accordance with the setting would exceed the final sound channel #7, so in this example the output of the microphone echo component is limited to 2 channels, as sketched below.
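  • The channel-clipping rule of this example can be captured in a few lines; the fixed channel count of 8 and the return value are choices made for the sketch only.

```c
#include <stdint.h>

#define LAST_SOUND_CHANNEL 7   /* sound channels #0..#7, as in the FIG. 12 case */

/* Starting at the effect channel, mark up to `requested` consecutive sound
 * channels for output of an echo (or microphone echo) component, never going
 * past the last sound channel.  Returns the number of channels actually used. */
static unsigned assign_effect_channels(unsigned effect_channel,
                                       unsigned requested,
                                       uint8_t used[LAST_SOUND_CHANNEL + 1])
{
    unsigned n = 0;
    for (unsigned ch = effect_channel;
         ch <= LAST_SOUND_CHANNEL && n < requested; ++ch, ++n)
        used[ch] = 1;            /* the channel's original setting is ignored */
    return n;                    /* output is clipped at the last channel     */
}
```

  • For the FIG. 12 case, calling assign_effect_channels(6, 3, used) marks sound channels #6 and #7 and returns 2, matching the description above.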
  • As described above, the effect original sound channel can be set to an arbitrary logical sound channel, so that a sound effect (for example, echo or reverb) can be applied to the wave data reproduced by any logical sound channel selected from among the plurality of logical sound channels. Also, since the plurality of logical sound channels are mixed and the effect channel can be set to any logical sound channel, wave data to which an acoustic effect has been added can be played back on any logical sound channel. Furthermore, if it is not necessary to add a sound effect, all the logical sound channels can be set to non-effect channels and assigned to the playback of wave data without effects.
  • any attribute can be set for any logical sound channel.
  • For example, it is possible to set the logical sound channel assigned to music playback as an effect original sound channel so that a sound effect such as echo is applied to the wave data being played back there, while setting the logical sound channel assigned to sound-effect playback as a non-effect channel so that no sound effect is applied to the wave data being played back there.
  • Functions provided for the logical sound channels, such as applying pitch modulation and amplitude modulation to the reproduced wave data, can also be used for the wave data assigned to the effect channel.
  • An analog audio signal input externally from an analog input device 52 such as a microphone can be converted into a digital audio signal (PCM data) by the ADC 33 and then played back with an effect added (microphone echo function).
  • Because the CPU 5 (that is, software) performs a part of the processing for adding this effect, the configuration for adding the effect can be constructed arbitrarily.
  • For example, the parts MBa, MBb, Ms, and As that generate the microphone echo in FIG. 5 are not necessarily limited to such a configuration and can be constructed in any configuration.
  • An echo FIFO buffer is constructed in the main RAM 25, and a delay is generated by taking out the wave data after a predetermined time.
  • The echo function is thus constructed easily by means of the echo FIFO buffer.
  • Since only a part of the main RAM 25 is used as the echo FIFO buffer, the main RAM 25 can be used for other purposes except for the area reserved for the echo FIFO buffer. The same applies to the microphone echo function.
  • The echo FIFO buffer can be used even in the case where a plurality of logical sound channels are set as effect original sound channels, because the wave data of those channels is accumulated before being written to the single echo FIFO buffer.
  • Since the funnel shifter 87 and the saturation processing circuit 89 of FIG. 10 are provided, an arbitrary bit string can be extracted from the value output from the accumulated value storage register 85 (the accumulated wave data), so that the amplitude of the wave data played back on the effect channel can be optimized.
  • Amplitude modulation at a predetermined rate (the echo release rate) is applied to the wave data Wi, which is retrieved and played back a predetermined time (Na + Nb) after entering the echo FIFO buffer; the multiplier 111 is provided for this purpose. The adder 113 is provided to add the wave data to which the amplitude modulation has been applied by the multiplier 111 to the wave data at the address LAD, which is displaced by a predetermined amount (Nb) from the read position RAD of the wave data Wi in the echo FIFO buffer (the address indicated by the read address counter 99) (see FIG. 9). In this way, a reverb function can be realized easily.
  • In the case of the microphone echo function, a reverb function can likewise be realized easily just by having the CPU 5, which performs these processes, carry them out in software.
  • Since the data handled by the logical sound channels is PCM data, that is, digital data, the processing required for imparting sound effects (for example, the delay processing) can be performed using buffers (the echo FIFO buffer and the microphone echo FIFO buffer), and the processing for providing an effect can be realized with a small logic circuit and/or small software.
  • The mixing of the multiple logical sound channels is not performed by an adder but is realized by time-division multiplexing (switching and outputting the PCM data of the multiple logical sound channels in time-slot units; see FIG. 8). Therefore, since an adder for mixing is unnecessary, the mixing circuitry preceding the audio DAC block 31 can be kept small and the cost can be reduced.
  • In the time-division output, the wave data reproduced on the effect channel can also be assigned to the output time slots of logical sound channels other than the effect channel, so that a volume exceeding the maximum volume of a single channel can be given to the wave data to which an acoustic effect (the effect by the echo function or by the microphone echo function) is added (see FIG. 12 and the sketch below). Note that music playback is performed using a plurality of logical sound channels, so if only one logical sound channel reproduced the echo component, the volume balance could be insufficient. The same applies to the microphone echo component.
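  • As a sketch of that volume-boost idea (again an assumption-laden illustration, not the actual output stage), the effect channel's sample can simply occupy more than one slot of a time-division frame:

```c
#include <stdint.h>

#define N_SLOTS 8   /* one time-division frame: one slot per logical channel */

/* Build one frame of the time-division output.  Slots flagged in
 * is_effect_slot[] carry the effect-channel sample, so the echo component
 * is output several times per frame and its effective volume can exceed
 * what a single logical sound channel could provide.                      */
static void build_tdm_frame(const int16_t ch_sample[N_SLOTS],
                            int16_t effect_sample,
                            const uint8_t is_effect_slot[N_SLOTS],
                            int16_t frame[N_SLOTS])
{
    for (unsigned s = 0; s < N_SLOTS; ++s)
        frame[s] = is_effect_slot[s] ? effect_sample : ch_sample[s];
}
```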
  • The number of physical sound channels is plural in this example, but a single physical sound channel can also be used.

Abstract

According to the present invention, echo blocks EB0 to EB(N-1) correspond to logical audio channels #0 to #(N-1). A logical audio channel corresponding to an echo block whose switch SW is on is set as an echo source. The output wave data from all the echo blocks set as the echo source are added by an adder Mp. A funnel shifter FS extracts a string of a predetermined number of bits from the addition result σ. The bit string is subjected to delay processing by queues Ba and Bb, a multiplier M1, and an adder A1, and is then assigned as an echo component to a logical channel (the effect channel). The effect channel can be designated arbitrarily from among the logical channels.
PCT/JP2006/320525 2005-10-20 2006-10-10 Processeur sonore et chaine audiophonique WO2007046311A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2007540953A JP5055470B2 (ja) 2005-10-20 2006-10-10 Sound processor

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2005-306257 2005-10-20
JP2005306257 2005-10-20

Publications (1)

Publication Number Publication Date
WO2007046311A1 true WO2007046311A1 (fr) 2007-04-26

Family

ID=37962405

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2006/320525 WO2007046311A1 (fr) 2005-10-20 2006-10-10 Processeur sonore et chaine audiophonique

Country Status (2)

Country Link
JP (1) JP5055470B2 (fr)
WO (1) WO2007046311A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0683343A (ja) * 1992-09-01 1994-03-25 Yamaha Corp 効果付与装置
JPH11220791A (ja) * 1998-02-02 1999-08-10 Shinsedai Kk サウンドプロセッサ
JP2000298480A (ja) * 1999-04-14 2000-10-24 Mitsubishi Electric Corp オーディオ信号レベル調整装置およびその調整方法

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS5732497A (en) * 1980-08-06 1982-02-22 Matsushita Electric Ind Co Ltd Echo adding unit
JP2852835B2 (ja) * 1992-06-25 1999-02-03 株式会社河合楽器製作所 音響効果装置
JP3085801B2 (ja) * 1992-11-11 2000-09-11 ヤマハ株式会社 変調信号発生装置

Also Published As

Publication number Publication date
JP5055470B2 (ja) 2012-10-24
JPWO2007046311A1 (ja) 2009-04-23

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
ENP Entry into the national phase

Ref document number: 2007540953

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 06811796

Country of ref document: EP

Kind code of ref document: A1