EP2184869B1 - Method and device for processing audio signals - Google Patents

Method and device for processing audio signals

Info

Publication number
EP2184869B1
EP2184869B1 (application number EP08019429.3A)
Authority
EP
European Patent Office
Prior art keywords
audio
processing
unit
processing unit
audio signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP08019429.3A
Other languages
German (de)
English (en)
Other versions
EP2184869A1 (fr)
Inventor
Attila Karamustafaoglu
Marten Sterngren
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Studer Professional Audio GmbH
Original Assignee
Studer Professional Audio GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Studer Professional Audio GmbH filed Critical Studer Professional Audio GmbH
Priority to EP08019429.3A priority Critical patent/EP2184869B1/fr
Publication of EP2184869A1 publication Critical patent/EP2184869A1/fr
Application granted granted Critical
Publication of EP2184869B1 publication Critical patent/EP2184869B1/fr
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04H: BROADCAST COMMUNICATION
    • H04H 60/00: Arrangements for broadcast applications with a direct linking to broadcast information or broadcast space-time; Broadcast-related systems
    • H04H 60/02: Arrangements for generating broadcast information; Arrangements for generating broadcast-related information with a direct linking to broadcast information or to broadcast space-time; Arrangements for simultaneous generation of broadcast information and broadcast-related information
    • H04H 60/04: Studio equipment; Interconnection of studios

Definitions

  • the invention relates to a device for processing audio signals and a method of processing audio signals.
  • the invention is for example applicable to audio mixing devices, such as digital mixers.
  • audio signals are often processed digitally, e.g. using a digital mixing console.
  • audio signals received over tens or hundreds of audio channels have to be processed.
  • the processing needs to be performed with very low latencies.
  • a digital mix engine requires a very high processing power.
  • Digital mixers are known in the art which use a plurality of digital signal processors (DSPs) for processing audio signals.
  • the implementation of a digital mixer using DSPs has the drawback that the DSPs need to be programmed in a low-level language, such as Assembler.
  • the programming of a mixing engine in such an architecture thus becomes very time and labor intensive. Further, the programming cannot be easily adapted, e.g. when changing to a new generation of DSPs.
  • Digital mixers based on field-programmable gate arrays (FPGAs) are also known in the art.
  • FPGAs offer high processing powers at low cost and are capable of achieving low latencies even for a large number of channels, which makes them particularly suitable for digital mixers for live applications.
  • Although FPGAs provide good performance, they require a very high programming effort, as they need to be programmed with both the structure of the signal path and the algorithm of the mix engine.
  • the programming needs to be performed in a low-level language, e.g. a hardware description language (HDL), making the programming very laborious.
  • Another reason for the very time and cost intensive development of a FPGA-based audio engine is that each design and debugging phase takes a large amount of time.
  • a further difficulty is the migration of a once programmed mixing engine to a new version of a FPGA chip.
  • the optimization and maintenance of a programmed mixing engine is complicated.
  • Digital mixers have also been implemented with software on personal computers (PCs). Such mixers often use audio signals recorded on a hard drive of the PC for mixing. Although having a high processing power, central processing units (CPUs) of personal computers are not capable of mixing tens or hundreds of digital channels with very low latencies. PC-based digital mixers are thus generally not suited for live applications.
  • a device for processing audio signals comprising an interface unit for receiving a plurality of audio signals.
  • the device further comprises a first processing unit adapted to perform an audio processing of an audio signal and a second processing unit adapted to perform a summation of audio signals.
  • a routing unit of the device is configured to route received audio signals between the interface unit, the first processing unit and the second processing unit.
  • the device is further provided with a control unit adapted to provide control information for each of the audio signals indicating how an audio signal is to be processed.
  • the control unit controls the routing unit such that a received audio signal for which the control information indicates that the audio signal is to be audio-processed is routed to the first processing unit, and that a received audio signal for which the control information indicates that the audio signal is to be summed with another audio signal is routed to the second processing unit (an illustrative routing sketch is given at the end of this section).
  • the device is implemented on a personal computer.
  • the first processing unit is a central processing unit of the personal computer and comprises an x86-compatible processor.
  • the second processing unit is provided on a computer expansion card.
  • Audio processing may for example comprise the application of gain control, equalizers, reverb, other filters, an adjustment of dynamics and the like (an illustrative gain and filter sketch is given at the end of this section).
  • As the function performed by the second processing unit is relatively simple, it may be efficiently programmed in a low-level language.
  • more complex audio processing functions may be performed at the first processing unit, which may be programmed in a higher level language. The programming of the device is thus facilitated.
  • the splitting of the more complex and the less complex functions between two processing units may further enable an implementation of a digital mixing engine with low latency, as the type of processing unit and the programming can be optimized for the particular functionality.
  • the first processing unit comprises an x86 compatible processor.
  • This may for example be a standard commercially available processor, which is available at relatively low costs, while being programmable in a high-level language. Thus, it is particularly suitable for more complex audio processing functions. Further, for this type of processor, new processor generations with increased calculation power are continuously being developed, and the device of the embodiment will benefit from such new developments. This is particularly true if the programming is performed in a higher level language, as the programming may then be easily adapted and transferred to a new processor generation.
  • the second processing unit is adapted to perform a summation of plural audio signals in parallel (an illustrative summation sketch is given at the end of this section).
  • the second processing unit may comprise a field-programmable gate array (FPGA) or a graphics processing unit (GPU).
  • the second processing unit may be provided on a computer expansion card, such as a peripheral component interconnect express (PCIe) card.
  • PCIe peripheral component interconnect express
  • FPGAs and GPUs are particularly well suited to perform a large number of calculations in parallel. They can achieve high calculation performance, reaching up to several hundred gigaflops.
  • FPGAs and GPUs can also be obtained cost-efficiently. Further, as the summation of audio signals is a relatively simple task, programming of a FPGA or GPU is facilitated. Such processing units have also experienced a steep increase in processing power in recent years, and it can be expected that their performance will be further enhanced in the future. By configuring the second processing unit in the above-mentioned way, the device of the present invention can make use of such an increase in processing power. This is particularly true when implementing the second processing unit on a computer expansion card, as this may be easily exchanged. As mainly relatively simple tasks that can be parallelized may be performed on the second processing unit, a migration of the programming to a new faster FPGA or GPU is facilitated.
  • the interface unit is connected to an audio network over which the audio signals are received in digital form.
  • the interface unit may be adapted to receive the audio signals in a fixed-point or a floating-point representation.
  • the processing of the audio signals at the first or the second processing unit may generally occur using a floating-point representation.
  • the second processing unit may thus be further adapted to perform a conversion of a received audio signal from a fixed-point representation to a floating-point representation (an illustrative conversion sketch is given at the end of this section).
  • the second processing unit may be particularly well adapted for performing such a relatively simple task which benefits from parallelization.
  • the control unit is configured to control the audio processing of audio signals by providing at least part of the control information to the first processing unit and/or to control the summation of audio signals by providing at least part of the control information to the second processing unit.
  • the control unit may comprise its own processing unit separate from the first processing unit and the second processing unit.
  • the control unit may further comprise a user interface, the control unit being adapted to determine the control information on the basis of control parameters entered by a user via the user interface.
  • the control information may thus comprise information on where to route an audio signal, how to audio-process the audio signals, and which audio signals are to be summed.
  • the routing of the audio signals to the first or the second processing unit is preferably performed automatically.
  • a user may for example enter the information that a first audio signal is to be audio-processed, and then summed with a second audio signal, in response to which the control unit may determine how the first and the second audio signals are to be routed. It may then provide the appropriate control information or control signals to the first and the second processing unit, as well as to the routing unit.
  • the routing unit and/or the control unit may be implemented in the first processing unit.
  • the first processing unit may for example run software, which implements the routing unit and/or the control unit.
  • the device may be implemented on a conventional personal computer, with the second processing unit provided in the form of a computer expansion card.
  • the first processing unit is configured to run a real-time operating system.
  • the audio processing may be performed with very low latencies.
  • the routing of audio signals may be implemented in an efficient way using such a processing unit.
  • the device is adapted to introduce a delay of less than four milliseconds between receiving an audio signal via the interface unit and outputting a processed audio signal via the interface unit after a processing of the received audio signal with at least one of the first and the second processing unit.
  • Such a configuration of the device is particularly advantageous for the processing of audio signals for live applications. Such a delay is hardly noticeable to the human ear, and such a device may accordingly be employed for a wide range of audio processing tasks.
  • the first processing unit may be adapted to introduce a delay of less than eight samples when audio-processing an audio signal and the second processing unit may be adapted to introduce a delay of less than six samples when summing audio signals (the corresponding sample-budget arithmetic is sketched at the end of this section).
  • Such configurations enable very low latencies for an audio processing with the inventive device.
  • a method of processing audio signals comprises the step of receiving a plurality of audio signals at an interface unit. Control information for each of said audio signals indicating how an audio signal is to be processed is then received. A received audio signal for which the control information indicates that the audio signal is to be audio-processed is routed to a first processing unit adapted to perform an audio processing of an audio signal. A received audio signal for which the control information indicates that the audio signal is to be summed with another audio signal is routed to a second processing unit adapted to perform a summation of audio signals. Similar advantages as outlined above with respect to the device for processing audio signals are also achieved with the inventive method of processing audio signals.
  • the method is performed by a device for processing audio signals which is implemented on a personal computer.
  • the first processing unit is a central processing unit of the personal computer and comprises an x86-compatible processor.
  • the second processing unit is provided on a computer expansion card.
  • an audio signal is routed in accordance with the control information received for a received audio signal.
  • the received audio signal is routed to the first processing unit for an audio processing of the audio signal.
  • the processed audio signal is then routed to the second processing unit for summation with another audio signal.
  • the summed audio signal is then routed to the first processing unit for an audio processing of the summed audio signal.
  • This may be considered a post-processing. Accordingly, the method of the present embodiment enables a comprehensive processing of audio signals. It should be clear that in other embodiments an audio signal may only be routed to one of the first and the second processing units or may be routed several times between those processing units.
  • a received audio signal is routed to the second processing unit for performing a conversion of the audio signal from a floating-point representation to a fixed-point representation after the audio signal was processed according to the respective control information.
  • the converted audio signal is then routed to the interface unit for outputting the audio signal.
  • an audio processing system comprising at least two of the above-mentioned devices for processing audio signals.
  • redundancy can be achieved and the probability of a failure of the audio processing system can be reduced.
  • the number of audio signals that may be processed with such an audio processing system can be increased, as the processing is performed with at least two of the above devices.
  • the audio processing system further comprises at least one additional routing unit interfacing the routing unit of a first of the at least two devices and the routing unit of a second of the at least two devices for routing audio signals to the first and the second device.
  • the processing load can be shared between the at least two devices (an illustrative load-distribution sketch is given at the end of this section).
  • the audio processing system may further comprise at least one additional control unit interfacing the control unit of a first of the at least two devices and the control unit of a second of the at least two devices for providing control information for the first and the second device.
  • the additional control unit may provide control information for at least one of the routing unit, the first processing unit and the second processing unit of each of said at least two devices.
  • the additional control unit may comprise a user interface and may be adapted to determine the control information on the basis of control parameters entered by a user via the user interface.
  • the additional control unit may also control the additional routing unit. Upon entering control parameters by a user, the additional control unit may automatically determine where to route audio signals.
  • Such a system is scalable and can achieve very high processing powers.
  • additional devices may be added to such a system. Further, if one device fails, the system remains functional and the device may be replaced without interrupting the operation of the system.
  • Fig. 1 schematically shows a device 100 for processing audio signals with a first processing unit 101 comprising an x86 compatible central processing unit (CPU).
  • Processing unit 101 may for example comprise a commercially available processor, such as a Pentium dual-core or quad-core processor, or a corresponding AMD processor, such as an AMD Opteron, Phenom or Turion processor.
  • Processing unit 101 implements an algorithm for performing an audio processing of audio signals. It may comprise further components common to a processing unit, such as different types of memory, e.g. random access memory (RAM) or non-volatile memory, such as a hard drive or flash memory, in which audio processing algorithms may be stored.
  • Processing unit 101 is particularly well adapted to perform a sequential processing of audio signals, although it may also process signals in parallel, e.g. when comprising multiple processing cores.
  • Being a standard, commercially available CPU, it can be obtained at relatively low cost.
  • Employing a standard CPU further has the benefit that it may be regularly replaced with new and faster models as soon as they become available. This is particularly the case as the programming of the CPU can be performed in a high-level language, so that when replacing the CPU with another model, a migration may be performed by a simple recompilation of the program code of the audio processing procedures.
  • Device 100 further comprises a second processing unit 102 which is particularly well adapted for performing a parallel processing.
  • The second processing unit 102 comprises a graphics processing unit (GPU) and may comprise further components, such as memory and the like.
  • the second processing unit 102 is thus particularly well suited for processing tasks that can be parallelized, such as the summation of several audio signals or a format conversion of audio signals, e.g. from fixed-point to floating-point representation and vice versa.
  • a GPU can be cost-efficiently obtained and can provide a processing power already exceeding that of CPUs.
  • device 100 can benefit from such a development, in particular if the second processing unit 102 is provided on a computer expansion card so that it can be easily replaced.
  • Routing unit 103 routes audio signals received via interface unit 104 to processing unit 101 and processing unit 102, as well as between said processing units.
  • the interface unit 104 receives the audio signals on separate audio channels, e.g. over an audio network using audio over IP, a multichannel audio digital interface (MADI) or any other, preferably digital, audio format. Audio signals of different audio channels may thus also be received sequentially as data packets or over separate channels. Audio signals may e.g. be received via audio over Ethernet using a protocol such as the Audio Video Bridging (AVB) protocol.
  • Control unit 105 interfaces processing units 101 and 102 as well as routing unit 103.
  • Control unit 105 comprises control information and provides corresponding control data or signals to the connected units.
  • the control information can for example describe the type of audio processing that is to be performed on a particular audio signal or channel, and which audio signals or channels are to be mixed by summation.
  • the control information may for example indicate that the audio signals of two channels are to be summed and then audio-processed by e.g. applying a filter or gain control.
  • Control unit 105 now controls routing unit 103 in such a way that audio signals that are to be summed are routed to the processing unit 102, whereas audio signals that are to be audio-processed are routed to processing unit 101.
  • Control unit 105 may thus automatically determine the path of a particular audio signal. In such a configuration, operations to be performed on an audio signal are efficiently shared between the two processing units, with each processing unit performing the task which it is best suited for. The device 100 accordingly achieves a high throughput and low latency for the processing of a large number of audio signals.
  • the introduced delay can be less than four milliseconds, preferably less than two milliseconds.
  • a delay of about one millisecond may be achieved at a sampling rate of 48 kHz for the audio signals. It should be clear that these are only examples, and that audio signals may be sampled at other frequencies, e.g. 96 kHz or even higher, and that more or fewer samples may be buffered, depending on the complexity of the processing.
  • Control unit 105 may further comprise a user interface, using which a user can control how audio signals are processed and mixed, e.g. by entering control parameters. Control unit 105 then supplies control parameters associated with functions implemented at processing unit 101 to the same, and provides control parameters associated with functions implemented at the processing unit 102 to the same. A user is thus given the possibility to comprehensively control the processing of audio signals received via interface unit 104, while control unit 105 and routing unit 103 work together to ensure that the audio signals are routed to the correct unit for processing.
  • the first processing unit of device 200 is again implemented as an x86-compatible CPU 201.
  • a second processing unit comprises the field-programmable gate array (FPGA) 202.
  • FPGA 202 is adapted to perform a parallel processing of a plurality of audio signals. In particular, it is adapted to perform a summation of various audio signals. By receiving audio signals on a relatively large number of input channels and outputting summed signals on a large number of output channels, a large summation matrix can be obtained, the implementation of which requires a high processing power.
  • FPGA 202 is capable of delivering the required processing power. This is particularly the case if an unlimited switch matrix is to be realized.
  • the FPGA can be programmed to perform a summation of a large number of audio channels in parallel.
  • the underlying structure of the FPGA can be programmed, e.g. using a hardware description language (HDL)
  • the FPGA can be adapted to perform such relatively simple tasks very efficiently.
  • As simple tasks such as summation and others are programmed into the FPGA, a migration to another version of an FPGA is facilitated.
  • the FPGA may not only be programmed with a summation functionality, e.g. a mix bus, but also with other functionalities that are relatively simple to program and benefit from a parallel computation.
  • Device 200 receives audio signals via input/output (I/O) cards 204. By providing several I/O cards as an interface unit, a large number of audio channels can be connected to device 200. Device 200 receives synchronous audio data via I/O cards 204, which are then transported over a PCI express (PCIe) bus to CPU 201.
  • CPU 201 implements the routing unit, i.e. by software running on CPU 201.
  • CPU 201 runs a real-time operating system, such as a real-time extension for Windows® or Linux®.
  • the routing functionality implemented at CPU 201 thus provides audio signals to FPGA 202, routes the signals to I/O cards 204 and provides the audio signals to an audio processing algorithm implemented at CPU 201.
  • the PCI express bus is used.
  • the PCI express bus enables very high data transfer rates, ranging up to several gigabytes per second.
  • CPU 201 may comprise plural processing cores, or plural CPUs may be provided.
  • the routing functionality may be performed by one core of CPU 201, whereas the audio processing may be performed on remaining cores.
  • Control unit 205 of the present embodiment comprises its own CPU and a user interface.
  • Control unit 205 provides control information to CPU 201, e.g. via an Ethernet connection.
  • CPU 201 processes audio signals according to the provided control information, and routes the audio signals to FPGA 202 for summation and/or conversion.
  • audio signals may be received in a fixed-point representation over I/O cards 204, a conversion to floating point may be performed in FPGA 202 before processing, and again a conversion to fixed-point representation may be performed before outputting the processed audio signals.
  • the interface unit may also receive the audio signals in a floating-point format, so that no conversion is necessary.
  • Device 200 further comprises the virtual studio technology (VST) unit 206.
  • VST unit 206 may implement software audio synthesizers, effect plug-ins and the like. The VST plug-ins may be run on a separate processing unit.
  • VST unit 206 interfaces CPU 201 via an Ethernet link. Alternatively, VST plug-ins may also be implemented on CPU 201.
  • VST unit 206 may provide additional audio signals or may process audio signals received via the interface cards 204.
  • VST unit 206 is controlled by control unit 205 via an Ethernet link. Control parameters of VST plug-ins running on unit 206 can be controlled via the user interface of control unit 205.
  • control unit 205 may also be implemented in CPU 201.
  • CPU 201 may then run a control application and may communicate with a user by interfacing a display and an input unit comprising controls.
  • device 200 is a high-performance digital mixer capable of simultaneously processing a large number of audio channels with very low latencies. Further, it can be implemented with commercially available hardware and can thus benefit from hardware developments. High processing performance is achieved by separating the processing functionalities between two processing units, wherein one unit is optimized for parallel processing of audio signals.
  • Fig. 3 is a flow diagram showing a method according to an embodiment of the present invention.
  • the method may for example be implemented at one of the devices 100 and 200.
  • An audio signal in the form of an audio stream is received at an interface unit in step 301.
  • control information is received from a control unit.
  • Received audio signals are routed to a first processing unit in step 303. This may be performed only for some received audio signals, according to the control information provided for the respective audio signal.
  • the audio signals are then audio-processed at the first processing unit (step 304).
  • the first processing unit, e.g. an x86-compatible processor, may run audio processing algorithms for performing gain control, applying equalizers, reverb, dynamic range compression or expansion, audio filtering and the like.
  • Different signals corresponding to different channels may be sequentially processed or may be processed in parallel, e.g. on plural cores of the first processing unit.
  • the audio signals are routed to the second processing unit in step 305.
  • audio signals are summed up at the second processing unit in step 306.
  • the summing may yield any combination of audio signals on a number of output channels.
  • the summed up audio signals are again routed to the first processing unit in step 307 for post-processing. An equalizing or a gain adjustment may for example again be performed.
  • the audio signals are routed to the interface unit for outputting in step 309.
  • the present method may be applied to audio streams and may thus be continuously performed.
  • Providing the control information in step 302 may only be performed when initially setting up the processing of audio signals. With the above method, a comprehensive audio processing and the summation of a large number of audio channels can be achieved.
  • In step 401, audio signals are received on channels 1-N.
  • the total number N of channels depends on the system configuration and may be as high as several hundreds or even higher.
  • channels 1-3 and channel N are explicitly shown.
  • the right hand side of Fig. 4 indicates the unit at which the respective method steps are performed.
  • Steps 402, which are performed at a CPU, comprise an audio processing which may be different for each channel.
  • the channels, or more particularly the audio signals carried on the channels, are summed in step 403.
  • the signals of some output channels, here bus outputs 1 and 2, are again routed to the CPU for performing another audio processing in steps 404.
  • In step 405, the audio signals are output on channels 1-M.
  • the number M of output channels may be the same as the number N of input channels, yet it may also be different.
  • the way in which audio channels are audio-processed and summed is determined by control information provided by a control unit. It should be clear that the method may comprise further steps, e.g. after the audio processing in steps 404 another summation may be performed as well as another audio processing, etc.
  • Fig. 5 shows an embodiment of an audio processing system.
  • the audio processing system 500 comprises two devices 501 and 502 for processing audio signals which may be configured according to any of the above-described embodiments.
  • both devices have a control unit and a routing unit implemented on their first processing unit (CPU).
  • devices 501 and 502 interface a high-performance network 503 for audio data.
  • the high-performance network 503 may be implemented by using a PCI express bus. As said before, such a bus achieves an excellent data throughput.
  • Audio input and output is performed by a third device 506, which comprises I/O cards 508 connected to an audio network. Audio signals may be received in the MADI format or via audio over IP.
  • Device 506 routes received audio signals to devices 501 and 502 for processing via the network interface 507 and the high-performance network 503.
  • Device 506 may itself also perform a processing of audio signals, e.g. a summation of signals received from devices 501 and 502.
  • Device 506 may thus also be configured according to any of the above-mentioned embodiments.
  • System 500 further comprises control unit 509 with user interface 510. These may be implemented on a separate work station and may for example comprise a whole audio mixing control console. Control unit 509 interfaces devices 501, 502 and 507 via standard network 511, e.g. Ethernet. Corresponding network interfaces are not shown for simplicity. A standard network may be employed, as the transport of control information is not subject to such stringent requirements as the transport of audio signals, where a minimum delay for large amounts of data needs to be achieved. In accordance with information entered by a user, control unit 509 provides control information on how particular channels are to be processed and summed. The control unit may further directly control to which of the devices 501 and 502 a particular audio channel is routed. On the other hand, control unit 509 may leave that decision to device 506, which may perform a load balancing between devices 501 and 502.
  • a redundancy is further achieved. If one of the devices 501 and 502 fails, the other device may take over the processing of concerned audio signals. The processing is thus not interrupted, and the failed component may be replaced without the need to stop the operation of system 500. As two devices 501 and 502 process audio signals, a large number of channels can be realized.
  • further devices may be connected to the high-performance network 503 and the standard network 511. The system is thus easily scalable. Similarly, further control units with further mixing consoles may be connected to network 511. Other components, such as media servers or VST work stations implementing different VST plug-ins may be connected to either network 511 or 503. The versatility and functionality of the system can thus be further enhanced. It should be clear that device 506 may also be provided in a redundant configuration.
  • a large audio mixing console with a high processing performance can be realized at relatively low costs. Further, such a console can fully benefit from new developments in processor technology, as the hardware components, such as CPU and GPU/FPGA, may be easily upgraded. Smaller mixing consoles can be realized by using only one of the above-mentioned devices. Even such a small system achieves an excellent audio processing performance and is easily upgraded.
  • Fig. 6 shows schematically another embodiment of an inventive audio processing system.
  • the audio processing system 600 comprises plural devices 601, 602 and 603 for processing audio signals. It should be clear that more or fewer than three devices may be provided. Audio signals can be exchanged between the devices via the high-performance network 609.
  • the devices receive control information from host controller 610 via network 611, which may be a standard network, such as Ethernet.
  • Each of the devices 601-603 comprises a central processing unit 604 for performing an audio processing, which may also implement a routing unit and a control unit. For a summation of audio signals, the audio signals are routed to unit 605, which may again be a GPU or a FPGA.
  • each of the devices 601-603 may comprise an interface unit 606, which may be connected to an audio network. Audio signals received via the interface of one device may be processed by the device or may be routed to another device for processing via the high-performance network 609.
  • a system using such a configuration is easily scalable, and is capable of processing a large number of audio channels, as each device may receive and output audio signals.
  • the system 600 and the devices 601-603 may comprise further elements not shown in Fig. 6 , such as interface units towards the high-performance network 609 and network 611, and the like.
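
The sketches below illustrate, under explicitly stated assumptions, some of the mechanisms described in this section; they are illustrative only and are not taken from the patent. The first sketch shows how a routing unit might dispatch a received signal block according to its control information: signals marked for audio processing go to the first processing unit, signals marked for summation go to the second. The ControlInfo fields, the placeholder back-end functions and the choice of C++ are assumptions.

```cpp
// Hypothetical control information and routing dispatch (not from the patent).
#include <cstdint>
#include <vector>

struct ControlInfo {
    bool audioProcess = false;   // process on the first unit (CPU): gain, EQ, dynamics, ...
    bool sumWithOther = false;   // sum on the second unit (FPGA/GPU) onto a mix bus
    std::uint32_t busId = 0;     // destination bus if summation is requested
};

struct AudioBlock { std::vector<float> samples; };

// Placeholder back-ends standing in for the two processing units and the interface unit.
void processOnCpu(AudioBlock&) { /* gain, EQ, dynamics ... */ }
void sumOnAccelerator(const AudioBlock&, std::uint32_t /*busId*/) { /* add into bus buffer */ }
void sendToInterface(const AudioBlock&) { /* output via the interface unit */ }

void route(AudioBlock& block, const ControlInfo& ctl)
{
    if (ctl.audioProcess)
        processOnCpu(block);                  // first processing unit
    if (ctl.sumWithOther)
        sumOnAccelerator(block, ctl.busId);   // second processing unit
    else
        sendToInterface(block);               // pass straight on to the output
}

int main()
{
    AudioBlock block{std::vector<float>(64, 0.0f)};
    ControlInfo ctl{true, true, 3};           // audio-process, then sum onto bus 3
    route(block, ctl);
}
```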
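
The description notes that audio processing on the first processing unit may comprise gain control, equalizers, reverb and other filters. Below is a minimal sketch of such per-channel processing, assuming a plain gain stage followed by a one-pole low-pass filter; the specific filter and the function name are illustrative, not the patent's algorithm.

```cpp
// Hypothetical per-channel processing on the first processing unit (CPU).
#include <cmath>
#include <vector>

// Applies a gain (in dB) and a one-pole low-pass filter in place.
void gainAndLowpass(std::vector<float>& channel, float gainDb, float alpha)
{
    const float gain = std::pow(10.0f, gainDb / 20.0f);   // dB to linear
    float state = 0.0f;                                    // filter memory
    for (float& s : channel) {
        s *= gain;                                         // gain control
        state += alpha * (s - state);                      // y[n] = y[n-1] + a*(x[n] - y[n-1])
        s = state;
    }
}

int main()
{
    std::vector<float> channel(48000, 0.5f);   // one second of a constant signal at 48 kHz
    gainAndLowpass(channel, -6.0f, 0.1f);      // attenuate by 6 dB and smooth
}
```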
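
The second processing unit is described as summing plural audio signals in parallel, up to a large summation matrix. The sketch below expresses such a mix bus as a gain-weighted sum per output bus, assuming a simple gain-matrix layout; the data layout and the sumBuses name are assumptions. The outer loop over buses is the part that an FPGA or GPU would evaluate in parallel, one adder tree or thread block per bus.

```cpp
// Hypothetical mix-bus summation as the second processing unit might perform it.
#include <cstddef>
#include <vector>

using Sample = float;

// gains[bus][ch] is the mix coefficient of input channel ch on output bus.
// Each bus is an independent weighted sum, so the outer loop parallelizes trivially.
void sumBuses(const std::vector<std::vector<Sample>>& inputs,   // [channel][frame]
              const std::vector<std::vector<Sample>>& gains,    // [bus][channel]
              std::vector<std::vector<Sample>>& buses)          // [bus][frame]
{
    const std::size_t numFrames = inputs.empty() ? 0 : inputs.front().size();
    buses.assign(gains.size(), std::vector<Sample>(numFrames, 0.0f));

    for (std::size_t bus = 0; bus < gains.size(); ++bus)        // parallel over buses
        for (std::size_t ch = 0; ch < inputs.size(); ++ch)
            for (std::size_t n = 0; n < numFrames; ++n)
                buses[bus][n] += gains[bus][ch] * inputs[ch][n];
}

int main()
{
    std::vector<std::vector<Sample>> in  = {{1.0f, 0.5f}, {0.25f, 0.25f}};  // 2 channels, 2 frames
    std::vector<std::vector<Sample>> g   = {{1.0f, 1.0f}};                  // one bus summing both
    std::vector<std::vector<Sample>> out;
    sumBuses(in, g, out);   // out[0] == {1.25f, 0.75f}
}
```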
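
The description also states that the second processing unit may convert received audio signals from a fixed-point to a floating-point representation, and back again before output. The per-sample sketch below assumes 24-bit samples carried in 32-bit integers; the bit width and scale factor are assumptions, since the patent does not fix a sample format. Every sample is independent, which is why this task benefits from parallel hardware.

```cpp
// Hypothetical fixed-point <-> floating-point conversion on the second processing unit.
#include <algorithm>
#include <cstdint>
#include <vector>

constexpr float kScale = 1.0f / 8388608.0f;   // 1 / 2^23, full scale for 24-bit samples

inline float fixedToFloat(std::int32_t s) { return static_cast<float>(s) * kScale; }

inline std::int32_t floatToFixed(float x)
{
    x = std::clamp(x, -1.0f, 1.0f - kScale);  // keep the result inside the 24-bit range
    return static_cast<std::int32_t>(x * 8388608.0f);
}

// Every sample is independent, so this loop maps directly onto parallel hardware.
void convertBlock(const std::vector<std::int32_t>& in, std::vector<float>& out)
{
    out.resize(in.size());
    std::transform(in.begin(), in.end(), out.begin(), fixedToFloat);
}

int main()
{
    std::vector<std::int32_t> fixed = {0, 4194304, -8388608};   // 0, +0.5, -1.0 of full scale
    std::vector<float> floating;
    convertBlock(fixed, floating);                              // {0.0f, 0.5f, -1.0f}
}
```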
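
The latency figures given above (less than eight samples in the first processing unit, less than six in the second, and an overall delay of roughly one millisecond at 48 kHz) can be checked with simple sample-budget arithmetic, sketched below. The split of the remaining budget between transport and interface buffering is an assumption, not something the patent quantifies.

```cpp
// Sample-budget arithmetic for the latency figures quoted in the description.
#include <cstdio>

int main()
{
    const double sampleRate   = 48000.0;              // Hz
    const double msPerSample  = 1000.0 / sampleRate;  // ~0.0208 ms per sample
    const int    cpuSamples   = 8;                    // budget of the first processing unit
    const int    accelSamples = 6;                    // budget of the second processing unit

    const double processingMs = (cpuSamples + accelSamples) * msPerSample;
    std::printf("per-sample period : %.4f ms\n", msPerSample);
    std::printf("processing budget : %.3f ms for %d samples\n",
                processingMs, cpuSamples + accelSamples);
    // ~0.29 ms of processing, leaving headroom inside the < 4 ms (preferably < 2 ms)
    // overall target; the rest of the budget covers transport and interface buffering.
    return 0;
}
```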
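
Finally, for the multi-device system, the description mentions that the processing load may be balanced between two devices. A trivial least-loaded assignment policy is sketched below; the policy and all names are assumptions, as the patent does not specify how the balancing is performed.

```cpp
// Hypothetical least-loaded channel assignment between processing devices.
#include <cstddef>
#include <vector>

struct Device {
    std::size_t assignedChannels = 0;   // current load, counted in channels
};

// Returns the index of the device that should take the next channel.
std::size_t pickDevice(const std::vector<Device>& devices)
{
    std::size_t best = 0;
    for (std::size_t i = 1; i < devices.size(); ++i)
        if (devices[i].assignedChannels < devices[best].assignedChannels)
            best = i;
    return best;
}

int main()
{
    std::vector<Device> devices(2);          // e.g. devices 501 and 502
    for (int ch = 0; ch < 8; ++ch)           // distribute eight incoming channels
        ++devices[pickDevice(devices)].assignedChannels;
    // Each device ends up with four channels.
}
```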

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Circuit For Audible Band Transducer (AREA)

Claims (22)

  1. Device for processing audio signals, comprising:
    - an interface unit (104) for receiving a plurality of audio signals;
    - a first processing unit (101) adapted to perform an audio processing of an audio signal;
    - a second processing unit (102) adapted to perform a summation of audio signals;
    - a routing unit (103) configured to route received audio signals between said interface unit (104), said first processing unit (102) and said second processing unit (103); and
    - a control unit (105) adapted to provide control information for each of said audio signals indicating how an audio signal is to be processed, the control unit (105) controlling said routing unit (103) such that a received audio signal, for which the control information indicates that the audio signal is to be summed with another audio signal, is routed to the second processing unit (102), and such that a received audio signal, for which the control information indicates that the audio signal is to be processed otherwise, is routed to the first processing unit (101),
    wherein the device is implemented as a personal computer with a computer expansion card on which the second processing unit is provided, characterized in that
    the first processing unit (101) is a central processing unit of the personal computer and comprises an x86-compatible processor.
  2. Device according to claim 1, wherein the second processing unit (102) is adapted to perform a summation of plural audio signals in parallel.
  3. Device according to any one of the preceding claims, wherein the second processing unit (102) comprises a field-programmable gate array or a graphics processing unit.
  4. Device according to any one of the preceding claims, wherein the computer expansion card is a peripheral component interconnect express card.
  5. Device according to any one of the preceding claims, wherein the interface unit (104) is connected to an audio network over which said audio signals are received in digital form.
  6. Device according to claim 5, wherein the interface unit (104) is adapted to receive the audio signals in a fixed-point or a floating-point representation.
  7. Device according to any one of the preceding claims, wherein the second processing unit (102) is further adapted to perform a conversion of a received audio signal from a fixed-point representation to a floating-point representation.
  8. Device according to any one of the preceding claims, wherein the control unit (105) is further configured to control said audio processing of audio signals by providing at least part of said control information to the first processing unit (101) and/or to control said summation of audio signals by providing at least part of said control information to the second processing unit (102).
  9. Device according to any one of the preceding claims, wherein the control unit (105) comprises its own processing unit separate from said first processing unit (101) and said second processing unit (102).
  10. Device according to any one of the preceding claims, wherein the control unit (509) comprises a user interface (510), the control unit being adapted to determine the control information on the basis of control parameters entered by a user via said user interface (510).
  11. Device according to any one of the preceding claims, wherein the routing unit (103) and/or the control unit (105) is (are) implemented in said first processing unit (101).
  12. Device according to any one of the preceding claims, wherein the first processing unit (101) is configured to run a real-time operating system.
  13. Device according to any one of the preceding claims, wherein the device is adapted to introduce a delay of less than 4 milliseconds between receiving an audio signal via the interface unit (104) and outputting a processed audio signal via said interface unit (104) after a processing of the received audio signal with at least one of the first and the second processing unit (101, 102).
  14. Method of processing audio signals, the method being performed by a device for processing audio signals, comprising:
    - receiving a plurality of audio signals at an interface unit (104);
    - receiving control information for each of said audio signals indicating how an audio signal is to be processed;
    - routing a received audio signal, for which the control information indicates that the audio signal is to be summed with another audio signal, to a second processing unit (102) adapted to perform a summation of audio signals,
    - routing a received audio signal, for which the control information indicates that the audio signal is to be processed otherwise, to a first processing unit (101) adapted to perform an audio processing of an audio signal, wherein the device is implemented as a personal computer with a computer expansion card on which the second processing unit is provided,
    characterized in that
    the first processing unit (101) is a central processing unit of the personal computer and comprises an x86-compatible processor.
  15. Method according to claim 14, further comprising routing a received audio signal to the second processing unit (102) for a conversion of the received audio signal from a fixed-point representation to a floating-point representation.
  16. Method according to claim 14 or 15, further comprising providing said control information by a control unit (509) having a user interface (510), the control information being determined on the basis of control parameters entered by a user via said user interface (510).
  17. Method according to any one of claims 14 to 16, further comprising, in accordance with the control information received for a received audio signal,
    - routing the received audio signal to the first processing unit (101) for an audio processing of the audio signal;
    - routing the processed audio signal to the second processing unit (102) for a summation with another audio signal; and
    - routing the summed audio signal to the first processing unit (101) for an audio processing of the summed audio signal.
  18. Method according to any one of claims 14 to 17, further comprising:
    - routing a received audio signal, after the audio signal has been processed according to said control information, to said second processing unit (102) for performing a conversion of the audio signal from a floating-point representation to a fixed-point representation, and
    - routing the converted audio signal to the interface unit (104) for outputting the audio signal.
  19. Audio processing system (500), comprising at least two devices (501; 502) according to any one of claims 1 to 13.
  20. Audio processing system according to claim 19, further comprising at least one additional routing unit (506) interfacing the routing unit of a first (501) of said at least two devices and the routing unit of a second (502) of the at least two devices, for routing audio signals to said first and said second device.
  21. Audio processing system according to claim 19 or 20, further comprising at least one additional control unit (509) interfacing the control unit of a first (501) of said at least two devices and the control unit of a second (501) of said at least two devices, for providing control information for said first and second devices.
  22. Audio processing system according to any one of claims 19 to 21, wherein the at least two devices are connected via an audio network.
EP08019429.3A 2008-11-06 2008-11-06 Procédé et dispositif pour le traitement de signaux audio Active EP2184869B1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP08019429.3A EP2184869B1 (fr) 2008-11-06 2008-11-06 Procédé et dispositif pour le traitement de signaux audio

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP08019429.3A EP2184869B1 (fr) 2008-11-06 2008-11-06 Procédé et dispositif pour le traitement de signaux audio

Publications (2)

Publication Number Publication Date
EP2184869A1 EP2184869A1 (fr) 2010-05-12
EP2184869B1 true EP2184869B1 (fr) 2017-06-14

Family

ID=41258836

Family Applications (1)

Application Number Title Priority Date Filing Date
EP08019429.3A Active EP2184869B1 (fr) 2008-11-06 2008-11-06 Procédé et dispositif pour le traitement de signaux audio

Country Status (1)

Country Link
EP (1) EP2184869B1 (fr)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5533386B2 (ja) * 2010-07-20 2014-06-25 ヤマハ株式会社 音響信号処理装置
DE102010044407B4 (de) * 2010-09-04 2021-04-08 Lawo Holding Ag Vorrichtung und Verfahren zur Verarbeitung von digitalisierten Audiodaten
EP2432224A1 (fr) * 2010-09-16 2012-03-21 Harman Becker Automotive Systems GmbH Système multimédia
CN104143334B (zh) * 2013-05-10 2017-06-16 中国电信股份有限公司 可编程图形处理器及其对多路音频进行混音的方法
CN107728199B (zh) * 2017-09-22 2019-05-31 中国地质大学(北京) 基于多gpu并行的多分量各向异性叠前时间偏移加速方法

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1872242A4 (fr) * 2005-04-19 2009-10-21 Fairlight Au Pty Ltd Systeme et procede de traitement de media

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
None *

Also Published As

Publication number Publication date
EP2184869A1 (fr) 2010-05-12

Similar Documents

Publication Publication Date Title
EP2184869B1 (fr) Procédé et dispositif pour le traitement de signaux audio
US20110265134A1 (en) Switchable multi-channel data transcoding and transrating system
US8370605B2 (en) Computer architecture for a mobile communication platform
EP3087472B1 (fr) Système et techniques d'entrée/sortie évolutifs
US20070043804A1 (en) Media processing system and method
JP2003303102A (ja) 画像処理装置
US20080158338A1 (en) Hardware architecture for video conferencing
US11909509B2 (en) Distributed audio mixing
US20070198991A1 (en) Microcontrol architecture for a system on a chip (SoC)
JP5802215B2 (ja) 複数の粒度を持つストリームを処理するためのプログラム、コンピュータシステムおよび方法
US9342564B2 (en) Distributed processing apparatus and method for processing large data through hardware acceleration
US10972520B1 (en) Monitor mixing system that distributes real-time multichannel audio over a wireless digital network
JP6993515B2 (ja) 高度自動運転が可能な車両用の制御機器のためのデータストリームの分配のための分配装置および方法
WO2011001303A1 (fr) Procédé, appareil et programme d'ordinateur permettant la mise en œuvre de fonctions multimédia à l'aide d'un composant enveloppeur logiciel
US11695535B2 (en) Reconfigurable mixer design enabling multiple radio architectures
CN115002127A (zh) 一种分布式音频***
KR102238720B1 (ko) 인코딩과 업로딩의 병행 처리를 통해 미디어 파일의 전송 시간을 단축시킬 수 있는 방법 및 시스템
EP3196756B1 (fr) Dispositif de commande
US20050097140A1 (en) Method for processing data streams divided into a plurality of process steps
US20140303761A1 (en) Real Time Digital Signal Processing
Cannon et al. Modular delay audio effect system on FPGA
JP2013009044A (ja) 制御装置、処理装置、処理システム、制御プログラム
US8230142B1 (en) Method and apparatus for providing egress data in an embedded system
EP2388706A1 (fr) Procédé et système de diffusion en continu en temps réel et de stockage
WO2022113736A1 (fr) Dispositif de traitement de signal

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MT NL NO PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL BA MK RS

17P Request for examination filed

Effective date: 20100806

AKX Designation fees paid

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MT NL NO PL PT RO SE SI SK TR

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: STUDER PROFESSIONAL AUDIO GMBH

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

INTG Intention to grant announced

Effective date: 20170221

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MT NL NO PL PT RO SE SI SK TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

Ref country code: AT

Ref legal event code: REF

Ref document number: 901879

Country of ref document: AT

Kind code of ref document: T

Effective date: 20170615

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602008050645

Country of ref document: DE

REG Reference to a national code

Ref country code: NL

Ref legal event code: MP

Effective date: 20170614

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170915

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170614

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170614

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170614

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170914

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170614

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 901879

Country of ref document: AT

Kind code of ref document: T

Effective date: 20170614

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170614

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170614

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170914

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170614

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170614

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170614

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170614

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170614

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170614

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170614

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171014

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170614

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602008050645

Country of ref document: DE

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170614

26N No opposition filed

Effective date: 20180315

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170614

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20171130

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20171130

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170614

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20171106

REG Reference to a national code

Ref country code: FR

Ref legal event code: ST

Effective date: 20180731

Ref country code: BE

Ref legal event code: MM

Effective date: 20171130

REG Reference to a national code

Ref country code: IE

Ref legal event code: MM4A

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MT

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20171106

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20171130

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20171106

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20171130

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO

Effective date: 20081106

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20170614

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170614

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170614

P01 Opt-out of the competence of the unified patent court (upc) registered

Effective date: 20230527

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20231109

Year of fee payment: 16

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20231031

Year of fee payment: 16