CN110838990A - Method and device for accelerating layer1 in C-RAN - Google Patents


Info

Publication number
CN110838990A
CN110838990A (application CN201810941435.4A)
Authority
CN
China
Prior art keywords
layer1
acceleration
execution result
subtask
queue
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810941435.4A
Other languages
Chinese (zh)
Inventor
王碧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nokia Shanghai Bell Co Ltd
Alcatel Lucent SAS
Original Assignee
Alcatel Lucent SAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alcatel Lucent SAS filed Critical Alcatel Lucent SAS
Priority to CN201810941435.4A priority Critical patent/CN110838990A/en
Priority to PCT/CN2019/100937 priority patent/WO2020035043A1/en
Publication of CN110838990A publication Critical patent/CN110838990A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/50 Queue scheduling
    • H04L 47/62 Queue scheduling characterised by scheduling criteria
    • H04L 47/6295 Queue scheduling characterised by scheduling criteria using multiple queues, one for each individual QoS, connection, flow or priority
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/50 Queue scheduling
    • H04L 47/62 Queue scheduling characterised by scheduling criteria
    • H04L 47/6215 Individual queue per QoS, rate or priority
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/50 Queue scheduling
    • H04L 47/62 Queue scheduling characterised by scheduling criteria
    • H04L 47/625 Queue scheduling characterised by scheduling criteria for service slots or service orders
    • H04L 47/6275 Queue scheduling characterised by scheduling criteria for service slots or service orders based on priority
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 52/00 Power management, e.g. TPC [Transmission Power Control], power saving or power classes
    • H04W 52/02 Power saving arrangements
    • H04W 52/0203 Power saving arrangements in the radio access network or backbone network of wireless communication networks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 8/00 Network data management
    • H04W 8/02 Processing of mobility data, e.g. registration information at HLR [Home Location Register] or VLR [Visitor Location Register]; Transfer of mobility data, e.g. between HLR, VLR or external networks
    • H04W 8/04 Registration at HLR or HSS [Home Subscriber Server]

Abstract

The invention provides a method and a device for accelerating layer1 in a C-RAN. Compared with the prior art, the method receives a layer1 acceleration cloud task sent by a user, splits the task into at least one subtask, allocates the subtasks to different queues according to a priority setting, triggers at least one worker to obtain and execute the corresponding subtasks from the queues according to its work configuration, obtains the execution results, and outputs them to a next-level target. Layer1 processing is thereby accelerated in a uniform manner and becomes clouded, flexible and economical.

Description

Method and device for accelerating layer1 in C-RAN
Technical Field
The present invention relates to the field of communication technologies, and in particular to a technique for accelerating layer1 in a C-RAN.
Background
For C-RAN (Cloud Radio Access Network), the goal is to move all baseband processing into a cloud computing environment. Parts such as the control plane and the management plane are easily moved into a cloud computing environment because they are not time-critical and require no special operations.
However, for the layer1 user plane, operations such as FFT (Fast Fourier Transform) and iFFT (inverse Fast Fourier Transform) involve a large amount of computation and are therefore usually accelerated in a conventional base station by a dedicated SOC (System on Chip) or DSP (Digital Signal Processor) chip. On general-purpose processors such as x86, layer1 user-plane data operations cannot be performed efficiently without corresponding acceleration instructions.
In the existing C-RAN, the layer1 user plane is still located in a RAP (Radio Access Point), and a DSP chip is applied to accelerate layer1 processing. However, the RAP is dedicated to a specific antenna and a specific VNF (Virtual Network Function); it can neither be shared among all users nor scaled in capacity. Layer1 processing is therefore not clouded.
The prior art currently offers some ideas for accelerating layer1 in a cloud environment, such as:
optimizing the code with SIMD (Single Instruction Multiple Data) instructions on a Xeon CPU;
performing the layer1 calculation purely in software on a Xeon Phi;
building an acceleration board with a DSP chip and connecting it to the cloud server over a PCIe (Peripheral Component Interconnect Express) interface.
However, each of these ideas has drawbacks in terms of efficiency and flexibility.
Disclosure of Invention
The invention aims to provide a method and equipment for accelerating a layer1 in a C-RAN.
According to an aspect of the present invention, there is provided a method of accelerating a layer1 in a C-RAN, wherein the method comprises:
receiving a layer1 acceleration cloud task sent by a user;
splitting the layer1 acceleration cloud task into at least one subtask, and distributing the at least one subtask to different queues according to priority setting;
triggering at least one working device to obtain corresponding subtasks from at least one queue according to the working configuration and executing the subtasks to obtain an execution result;
and outputting the execution result to a next-level target.
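Purely as an illustrative sketch, not as part of the claimed method, the four steps above can be modeled with in-memory queues. All names (split_task, allocate, run_worker) and the task format are assumptions of this sketch:

```python
import queue

def split_task(cloud_task):
    """Step 2a: split a layer1 acceleration cloud task into stateless subtasks."""
    return [{"op": op, "priority": p} for op, p in cloud_task["ops"]]

def allocate(subtasks, queues):
    """Step 2b: allocate each subtask to a queue according to its priority."""
    for st in subtasks:
        queues[st["priority"]].put(st)

def run_worker(queues, served_priorities):
    """Step 3: a worker drains its configured queues, highest priority first."""
    results = []
    for p in served_priorities:
        while not queues[p].empty():
            st = queues[p].get()
            results.append(("done", st["op"]))  # step 4: output to next-level target
    return results

queues = {0: queue.Queue(), 1: queue.Queue()}  # 0 = high priority
task = {"ops": [("crc", 0), ("fft", 1), ("encode", 0)]}
allocate(split_task(task), queues)
print(run_worker(queues, [0, 1]))
# [('done', 'crc'), ('done', 'encode'), ('done', 'fft')]
```

Note how the two priority-0 subtasks are drained before the priority-1 subtask, mirroring the priority-setting step of the claim.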
Preferably, the at least one worker is obtained by integrating a System On Chip (SOC) and/or a Digital Signal Processor (DSP) chip.
Preferably, the next-level target comprises at least any one of:
an antenna;
an antenna data converter;
a Radio Remote Unit (RRU);
a receiver for monitoring the execution result;
at least one of the queues.
More preferably, the next-level target includes at least one of the queues, the obtained execution result includes at least one new task, and the step of outputting the execution result to the next-level target includes:
assigning the at least one new task to at least one of the queues.
Preferably, the method further comprises:
and analyzing the data packet corresponding to the layer1 acceleration cloud task, and determining the priority setting.
Preferably, the method further comprises:
and adjusting the execution support of the at least one worker on the queue according to the distribution condition of the sub-tasks in the queue and the load condition of the at least one worker.
Preferably, the method further comprises:
and acquiring the load condition of the at least one working device, and if the load of the working device is lower than a preset threshold value, sleeping or closing the working device.
According to another aspect of the present invention, there is also provided an acceleration apparatus for accelerating a layer1 in a C-RAN, wherein the acceleration apparatus includes:
the receiving device is used for receiving the layer1 acceleration cloud task sent by the user;
the distribution device is used for splitting the layer1 acceleration cloud task into at least one subtask and distributing the at least one subtask to different queues according to priority setting;
the triggering device is used for triggering at least one working device to acquire and execute corresponding subtasks from at least one queue according to the working configuration to acquire an execution result;
and the output device is used for outputting the execution result to a next-stage target.
Preferably, the at least one worker is obtained by integrating a System On Chip (SOC) and/or a Digital Signal Processor (DSP) chip.
Preferably, the next-level target comprises at least any one of:
an antenna;
an antenna data converter;
a Radio Remote Unit (RRU);
a receiver for monitoring the execution result;
at least one of the queues.
More preferably, the next-level target comprises at least one of the queues, the obtained execution result comprises at least one new task, and the output device is configured to:
assigning the at least one new task to at least one of the queues.
Preferably, the acceleration device further comprises:
and the analyzing device is used for analyzing the data packet corresponding to the layer1 acceleration cloud task and determining the priority setting.
Preferably, the acceleration device further comprises:
and the adjusting device is used for adjusting the execution support of the at least one working device on the queue according to the distribution condition of the subtasks in the queue and in combination with the load condition of the at least one working device.
Preferably, the acceleration device further comprises:
and the judging device is used for acquiring the load condition of the at least one working device, and if the load of the working device is lower than a preset threshold value, the working device is dormant or closed.
Compared with the prior art, the method and the device receive the layer1 acceleration cloud task sent by the user, split it into at least one subtask, allocate the subtasks to different queues according to a priority setting, trigger at least one worker to obtain and execute the corresponding subtasks from the queues according to its work configuration, obtain the execution results, and output them to a next-level target. Layer1 processing is thereby accelerated in a uniform manner and becomes clouded, flexible and economical.
Furthermore, the invention integrates the existing SOC/DSP chips into a heterogeneous cloud environment, from which a layer1 processing resource pool can be established.
To the user, this provides a unified, stateless layer1 processing service interface; layer1 processing can be divided into a number of small stateless jobs, such as CRC (Cyclic Redundancy Check), encoding, decoding, iFFT (inverse Fast Fourier Transform), FFT (Fast Fourier Transform), etc.
For the antenna data, converters such as CPRI (Common Public Radio Interface) converters are introduced to distribute the antenna data to different RRUs (Remote Radio Units).
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments made with reference to the following drawings:
fig. 1 illustrates a flow diagram of a method of accelerating layer1 in a C-RAN in accordance with an aspect of the present invention;
fig. 2 shows an architecture diagram for accelerating layer1 in a C-RAN according to a preferred embodiment of the present invention;
fig. 3 to 6 show a schematic illustration of acceleration of layer1 in a C-RAN according to another preferred embodiment of the invention;
fig. 7 shows a schematic diagram of accelerating a layer1 in a C-RAN according to yet another preferred embodiment of the present invention;
fig. 8 shows a schematic diagram of accelerating a layer1 in a C-RAN according to yet another preferred embodiment of the present invention;
fig. 9 shows a schematic diagram of accelerating a layer1 in a C-RAN according to yet another preferred embodiment of the present invention.
The same or similar reference numbers in the drawings identify the same or similar elements.
Detailed Description
The present invention is described in further detail below with reference to the attached drawing figures.
The term "base station" as used herein may be considered synonymous with, and sometimes referred to hereinafter as: a node B, an evolved node B, an eNodeB, an eNB, a Base Transceiver Station (BTS), an RNC, etc., and may describe a transceiver that communicates with a mobile terminal and provides radio resources for it in a wireless communication network that may span multiple technology generations. The base stations discussed herein may have all of the functionality associated with conventional well-known base stations, except for the ability to implement the methods discussed herein.
The methods discussed below may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments to perform the necessary tasks may be stored in a machine or computer readable medium such as a storage medium. The processor(s) may perform the necessary tasks.
Specific structural and functional details disclosed herein are merely representative and are provided for purposes of describing example embodiments of the present invention. The present invention may, however, be embodied in many alternate forms and should not be construed as limited to only the embodiments set forth herein. It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element may be termed a second element, and, similarly, a second element may be termed a first element, without departing from the scope of example embodiments. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element or intervening elements may be present. In contrast, when an element is referred to as being "directly connected" or "directly coupled" to another element, there are no intervening elements present. Other words used to describe the relationship between elements (e.g., "between" versus "directly between", "adjacent" versus "directly adjacent to", etc.) should be interpreted in a similar manner.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be noted that, in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which example embodiments belong. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Fig. 1 shows a flow diagram of a method of accelerating a layer1 in a C-RAN according to an aspect of the invention.
The method comprises steps S101, S102, S103 and S104.
In step S101, the acceleration device 1 receives a layer1 acceleration cloud task transmitted by a user.
Specifically, the user sends the layer1 acceleration cloud task through, for example, the job portal shown in fig. 2, which shows an architecture diagram for accelerating layer1 in the C-RAN according to a preferred embodiment of the present invention. The architecture comprises a job entry, N queues, and N workers corresponding to the N queues; the output of a worker may be directed to an antenna, an antenna data converter, or a receiver, or may be re-queued.
In step S101, the acceleration device 1 receives the layer1 acceleration cloud task sent by the user through an agreed communication mode. The layer1 acceleration cloud task may be a simple task, such as attaching a CRC (Cyclic Redundancy Check), or a large task, such as one comprising the entire process from attaching the CRC to OFDM (Orthogonal Frequency Division Multiplexing) signal generation. The layer1 acceleration cloud task may also include encoding, decoding, iFFT (inverse Fast Fourier Transform), FFT (Fast Fourier Transform) and other operations, and may, for example, be an execution result or a data packet from layer 2.
It should be understood by those skilled in the art that the above-described layer1 accelerated cloud task is only for illustration and not intended to limit the present invention, and other layer1 accelerated cloud tasks that may be present or later come into existence, such as may be applicable to the present invention, are also included within the scope of the present invention and are hereby incorporated by reference.
In step S102, the acceleration device 1 splits the layer1 acceleration cloud task into at least one subtask, and allocates the at least one subtask to different queues according to the priority setting.
Specifically, the acceleration apparatus 1 splits the layer1 acceleration cloud task received from the user into at least one subtask, for example into a plurality of small stateless jobs, such as subtasks for CRC (Cyclic Redundancy Check), encoding, decoding, iFFT (inverse Fast Fourier Transform), FFT (Fast Fourier Transform), and the like.
Here, stateless means that a job has no dependency on the sequence of previous and subsequent operations and does not depend on whether the user has established a context; the layer1 operation can be independent of the user, relating only to the encoding, the resource mapping, and the description in the layer1 acceleration cloud task.
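As a hedged illustration of such a stateless job (using Python's standard binascii CRC-32 rather than the LTE CRC polynomials, purely for demonstration), the result depends only on the input block, never on per-user context:

```python
import binascii

# Illustrative stateless layer1 job: attaching a CRC depends only on the
# input transport block; function names here are our own, not the patent's.

def attach_crc32(block: bytes) -> bytes:
    """Append a CRC-32 checksum to a transport block."""
    crc = binascii.crc32(block)
    return block + crc.to_bytes(4, "big")

def check_crc32(block_with_crc: bytes) -> bool:
    """Verify the appended CRC-32 against the payload."""
    block, tail = block_with_crc[:-4], block_with_crc[-4:]
    return binascii.crc32(block).to_bytes(4, "big") == tail

tb = attach_crc32(b"layer1 payload")
assert check_crc32(tb)                         # same input, same result: no state
corrupted = bytes([tb[0] ^ 1]) + tb[1:]        # flip one payload bit
assert not check_crc32(corrupted)              # CRC-32 detects single-bit errors
```

Because the job carries everything it needs in its input, any worker in the pool can execute it without a per-user session.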
The acceleration arrangement 1 then allocates the at least one subtask to a different queue according to the priority setting. For example, assuming that the layer1 acceleration cloud task received by the acceleration apparatus 1 from the job entry is split into 6 sub-tasks, the acceleration apparatus 1 may allocate the 6 sub-tasks to 6 different queues according to the priority setting, or may allocate the 6 sub-tasks to 4 different queues according to the priority setting, where 2 queues have 2 sub-tasks respectively. Here, one queue may be allocated with a plurality of subtasks, or may have only one subtask.
Here, the acceleration device 1 may allocate a high-priority subtask to a high-priority queue, to a queue whose corresponding workers have high performance, or to a queue served by a large number of corresponding workers, or may map the high-priority subtask in full.
It will be appreciated by those skilled in the art that the foregoing sub-tasks and distribution are exemplary and not limiting, and that other sub-tasks and distribution possibilities now or later developed, such as may be suitable for use with the present invention, are also within the scope of the present invention and are hereby incorporated by reference.
Preferably, the method further comprises step S105 (not shown). In step S105, the acceleration device 1 parses the data packet corresponding to the layer1 acceleration cloud task and determines the priority setting. Subsequently, in step S102, the acceleration device 1 allocates the at least one subtask to different queues according to the priority setting determined in step S105.
Specifically, in step S105, the acceleration device 1 parses the data packet corresponding to the layer1 acceleration cloud task. The priority setting of the task is defined in the description file of the data packet; by parsing the packet, the acceleration device 1 determines the priority of each subtask corresponding to the task, that is, which subtasks are allocated to which queues.
For example, the layer1 acceleration cloud task may include a data packet from layer 2 as an execution result; in step S105, the acceleration apparatus 1 parses this packet and obtains the priority setting information from its description file.
For example, assume that in step S101 the acceleration device 1 receives a layer1 acceleration cloud task from the job portal which comprises, for example, 6 subtasks. In step S105, the acceleration device 1 parses the corresponding data packet and determines the relevant priority setting; the setting may, for example, indicate that the 6 subtasks all have the same priority. In step S102, the acceleration apparatus 1 then splits the task into the 6 subtasks and allocates them, according to that setting, to 6 different queues which may have the same priority.
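The patent does not fix a format for the description file; assuming, purely for illustration, that the packet carries a JSON description listing each subtask with a numeric priority, the parsing step might look like:

```python
import json

# Hypothetical sketch of step S105: the JSON layout and the default
# priority of 0 are assumptions of this example, not part of the patent.

def parse_priority_setting(packet: bytes) -> dict:
    """Parse the description carried with a layer1 acceleration cloud task
    and return a mapping from subtask name to priority."""
    desc = json.loads(packet.decode("utf-8"))
    return {job["name"]: job.get("priority", 0) for job in desc["subtasks"]}

pkt = json.dumps({"subtasks": [
    {"name": "crc", "priority": 0},
    {"name": "fft"},                 # no priority given: default to 0
    {"name": "ofdm", "priority": 2},
]}).encode()
print(parse_priority_setting(pkt))   # {'crc': 0, 'fft': 0, 'ofdm': 2}
```

The returned mapping is exactly what step S102 needs to decide which subtask goes to which queue.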
In step S103, the acceleration apparatus 1 triggers at least one working device to obtain and execute a corresponding sub-task from at least one queue according to the working configuration, so as to obtain an execution result.
Specifically, in step S103, the acceleration apparatus 1 triggers at least one worker to obtain and execute the corresponding subtasks from at least one queue according to the work configuration. For example, if the work configuration of worker 1 specifies queues 1, 2 and 3, the acceleration apparatus 1 triggers worker 1 to fetch the corresponding subtasks from queues 1, 2 and 3, execute them, and obtain the execution results. Here, a worker may be configured to retrieve tasks from one or more queues.
The queues may correspond to different priorities; for example, if queues 1, 2 and 3 are ordered from high to low priority, worker 1 may be configured to obtain the corresponding subtasks from queues 1, 2 and 3 in that priority order.
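This fetch-in-priority-order behaviour can be sketched as follows; the queue names and the ordering convention are illustrative assumptions, not claimed specifics:

```python
import queue

# Sketch of the work configuration above: a worker always takes from the
# highest-priority non-empty queue among those it serves.

def next_subtask(served_queues):
    """Return the next subtask in priority order, or None if all are empty.

    served_queues is ordered from highest to lowest priority."""
    for q in served_queues:
        try:
            return q.get_nowait()
        except queue.Empty:
            continue
    return None

q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
q2.put("decode")
q3.put("fft")
assert next_subtask([q1, q2, q3]) == "decode"   # q1 empty, so q2 is served first
assert next_subtask([q1, q2, q3]) == "fft"
assert next_subtask([q1, q2, q3]) is None
```

A worker loop would simply call next_subtask repeatedly and execute whatever it returns.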
Preferably, the at least one worker is obtained by integrating a System On Chip (SOC) and/or a Digital Signal Processor (DSP) chip.
Here, by integrating the existing SOC and/or DSP chips together as a heterogeneous cloud environment, a layer1 processing resource pool can be established. From the user plane's point of view, there is no need to distinguish between SOC and DSP chips; the hardware is simply regarded as individual workers. In step S103, the acceleration apparatus 1 triggers at least one of these workers to obtain the corresponding subtask from at least one queue according to the work configuration and execute it, so as to obtain an execution result.
In step S104, the acceleration apparatus 1 outputs the execution result to the next-level target.
Specifically, in step S104, the acceleration device 1 outputs the execution result to the next-level target, or triggers each of the above-mentioned workers to do so. For example, as shown for workers 1 and 2 in fig. 2, a worker's output may be sent to an antenna data converter, which then forwards the execution result to an antenna. Alternatively, the output may go to a receiver that monitors the execution results; worker N in fig. 2, for instance, may output its execution result either to an antenna or to such a receiver.
Preferably, the next level of objective comprises at least any one of:
an antenna;
an antenna data converter;
a Radio Remote Unit (RRU);
a receiver for monitoring the execution result;
at least one of the queues.
It will be understood by those skilled in the art that the foregoing further objects are merely illustrative and not restrictive of the present invention, and that other objects, now known or later developed, which may be suitable for use in the present invention are also included within the scope of the present invention and are hereby incorporated by reference.
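A dispatch of an execution result to the next-level targets listed above could be sketched as follows; the handler names and the stand-in sinks are assumptions of this example:

```python
from collections import defaultdict
import queue

# Illustrative routing of an execution result to its next-level target.
# The list of targets mirrors the patent's enumeration; real antenna or
# RRU I/O is replaced by an in-memory sink for demonstration.

def output_result(result, target, queues, sinks):
    """Route an execution result to its next-level target."""
    if target == "queue":                       # result spawned a new task
        queues[result["priority"]].put(result)
    elif target in ("antenna", "antenna_converter", "rru", "receiver"):
        sinks[target].append(result)            # stand-in for real output
    else:
        raise ValueError(f"unknown next-level target: {target}")

queues = defaultdict(queue.Queue)
sinks = defaultdict(list)
output_result({"op": "ifft", "priority": 1}, "queue", queues, sinks)
output_result({"op": "ofdm"}, "antenna", queues, sinks)
assert queues[1].qsize() == 1
assert len(sinks["antenna"]) == 1
```

The "queue" branch is what enables the re-queueing behaviour described for fig. 7 below: a result that is itself a new task simply re-enters the scheduling loop.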
Fig. 3 to 6 show a schematic illustration of acceleration of layer1 in a C-RAN according to another preferred embodiment of the invention.
In fig. 3, the layer1 acceleration cloud task sent by the user includes 6 subtasks.
In fig. 4, the layer1 accelerated cloud task is split into 6 subtasks, where subtasks 1 and 2 are assigned to queue 1, subtask 3 is assigned to queue 2, subtask 4 is assigned to queue 3, and subtasks 5 and 6 are assigned to queue N.
In fig. 5, a worker 1 obtains a subtask 1 from a queue 1 according to a work configuration and executes the subtask 1; and the worker 2 acquires and executes the subtask 3 from the queue 2 according to the work configuration, and the worker N acquires and executes the subtask 5 from the queue N.
In fig. 6, worker 1 outputs execution result 1, obtained by executing subtask 1, to an antenna via the antenna data converter, and then continues by obtaining subtask 2 from queue 1 and executing it; worker 2 outputs execution result 3, obtained by executing subtask 3, to another antenna via the antenna data converter, and then continues by obtaining subtask 4 from queue 3 and executing it; worker N outputs execution result 5, obtained by executing subtask 5, to another antenna, and then continues by obtaining subtask 6 from queue N and executing it.
Preferably, when the next-level target includes at least one of the queues and the obtained execution result includes at least one newly generated task, in step S104 the acceleration apparatus 1 allocates the at least one new task to at least one of the queues.
Specifically, in step S103, the worker obtains the corresponding sub-task from the queue and executes the sub-task, and the obtained execution result may be that at least one new task is generated again; in step S104, the acceleration apparatus 1 assigns the at least one new task to at least one of the queues, and at least one of the workers retrieves the new task from the queue and executes the new task.
For example, as shown in fig. 7, worker 1 obtains subtask 2 from queue 1 and executes it; the obtained execution result is a new task 2', which is allocated back to queue 1 (it could equally be allocated to another queue, such as queue 2) and may then be obtained and executed by one of the workers. Meanwhile, worker 2 obtains subtask 4 from queue 3, executes it, and outputs execution result 4 to an antenna through the antenna data converter; worker N obtains subtask 6 from queue N, executes it, and outputs execution result 6 to a receiver that monitors the execution results.
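The re-queueing loop of fig. 7 can be sketched as a pipeline in which executing a subtask yields either a final result or a follow-on task; the two-stage crc-to-fft pipeline below is purely illustrative:

```python
import queue

# Sketch of fig. 7's behaviour: a new task produced by execution re-enters
# a queue until a final result emerges. The pipeline stages are assumed.

PIPELINE = {"crc": "fft", "fft": None}   # fft is the last stage here

def execute(subtask):
    """Run one stage; return (final_result, new_task), one of them None."""
    next_stage = PIPELINE[subtask["stage"]]
    if next_stage is None:
        return f"result:{subtask['payload']}", None
    return None, {"stage": next_stage, "payload": subtask["payload"]}

q = queue.Queue()
q.put({"stage": "crc", "payload": "tb0"})
results = []
while not q.empty():
    final, new_task = execute(q.get())
    if new_task is not None:
        q.put(new_task)              # the new task 2' re-enters a queue
    else:
        results.append(final)
print(results)                       # ['result:tb0']
```

In this way a large layer1 job can flow through the pool stage by stage without any worker holding per-job state.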
Preferably, the method further comprises step S106 (not shown). In step S106, the acceleration apparatus 1 adjusts, according to the distribution condition of the subtasks in the queue and in combination with the load condition of the at least one worker, the execution support of the queue by the at least one worker.
Specifically, in step S106, the acceleration device 1 may obtain the allocation status of the subtasks in each queue and the load status of each worker, for example how many subtasks are allocated to each queue, the priority of each queue, and which queues each worker serves according to its work configuration. Based on this, the acceleration device 1 may adjust each worker's execution support for the queues. For example, when a queue has been allocated many subtasks while a worker is idle, the acceleration device 1 may trigger that worker to obtain and execute subtasks from the queue, thereby relieving the load pressure on the other workers and balancing the load among the workers.
For example, as shown in fig. 8, all of subtasks 1 to 6 are assigned to queue 2, and depending on the load of the respective workers, workers 1 and 2 are configured to fetch the subtasks from queue 2 and execute them.
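The rebalancing decision of fig. 8 can be sketched as pointing idle workers at the deepest backlog; the idle threshold and the data shapes are assumptions of this sketch:

```python
# Sketch of step S106: workers whose load is below an assumed idle
# threshold are reassigned to the queue with the most pending subtasks.

def rebalance(queue_depths, worker_loads, idle_threshold=0.1):
    """Return {worker: queue} reassignments for under-loaded workers."""
    if not queue_depths:
        return {}
    busiest = max(queue_depths, key=queue_depths.get)
    return {w: busiest for w, load in worker_loads.items()
            if load < idle_threshold}

depths = {"q1": 0, "q2": 6, "q3": 1}          # fig. 8: all work piled on q2
loads = {"worker1": 0.05, "worker2": 0.02, "workerN": 0.9}
print(rebalance(depths, loads))   # {'worker1': 'q2', 'worker2': 'q2'}
```

A real scheduler would also weigh queue priorities and worker capabilities, but the backlog-versus-load comparison is the core of the adjustment described above.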
Preferably, the method further comprises step S107 (not shown). In step S107, the acceleration device 1 obtains the load condition of the at least one working device, and if the load of the working device is lower than a predetermined threshold, the working device is put to sleep or shut down.
Specifically, in step S107, the acceleration device 1 may obtain the load condition of each worker; if the load of a worker is lower than a predetermined threshold, the acceleration device 1 may put that worker to sleep or turn it off. The predetermined threshold, used to judge a worker's load condition, may be preset by the system or adjusted according to actual conditions.
For example, as shown in fig. 9, when the load of the worker N is lower than a predetermined threshold, the worker N is put to sleep or turned off.
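The power-saving check of step S107 and fig. 9 amounts to a per-worker threshold comparison; the 10% threshold and the sleep/active labels below are illustrative assumptions:

```python
# Sketch of step S107: each worker is marked for sleep when its measured
# load falls below a predetermined (here assumed) threshold.

def power_policy(worker_loads, threshold=0.1):
    """Map each worker to 'active' or 'sleep' based on its load."""
    return {w: ("sleep" if load < threshold else "active")
            for w, load in worker_loads.items()}

loads = {"worker1": 0.7, "worker2": 0.4, "workerN": 0.03}
print(power_policy(loads))
# {'worker1': 'active', 'worker2': 'active', 'workerN': 'sleep'}
```

Waking a sleeping worker when queue backlogs grow again would be the inverse decision, using the same load measurements.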
According to another aspect of the present invention, there is also provided an acceleration apparatus for accelerating a layer1 in a C-RAN. Wherein, the acceleration apparatus 1 comprises a receiving apparatus 201 (not shown) for receiving the layer1 acceleration cloud task sent by the user; an allocating device 202 (not shown) configured to split the layer1 accelerated cloud task into at least one sub-task, and allocate the at least one sub-task to different queues according to a priority setting; a triggering device 203 (not shown) for triggering at least one worker to obtain and execute a corresponding subtask from at least one queue according to the work configuration, so as to obtain an execution result; an output device 204 (not shown) for outputting the execution result to the next stage target.
The receiving device 201 receives the layer1 acceleration cloud task sent by the user.
Specifically, the user sends the layer1 acceleration cloud task through, for example, the job portal shown in fig. 2, which depicts an architecture for accelerating layer1 in the C-RAN according to a preferred embodiment of the present invention. The architecture includes a job entry, N queues, and N workers corresponding to the N queues; the output of a worker may be directed to an antenna, an antenna data converter, or a receiver, or may be re-queued.
The receiving device 201 receives the layer1 acceleration cloud task sent by the user through an agreed communication mode. The layer1 acceleration cloud task may be a simple task, such as attaching a CRC (Cyclic Redundancy Check), or a large task, such as the entire chain of processes from attaching a CRC to generating an OFDM (Orthogonal Frequency Division Multiplexing) signal. The layer1 acceleration cloud task may also include encoding, decoding, iFFT (inverse fast Fourier transform), FFT (fast Fourier transform), and other operations, and may comprise, for example, an execution result or a data packet from layer 2.
It should be understood by those skilled in the art that the above-described layer1 accelerated cloud task is only for illustration and not intended to limit the present invention, and other layer1 accelerated cloud tasks that may be present or later come into existence, such as may be applicable to the present invention, are also included within the scope of the present invention and are hereby incorporated by reference.
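One of the simpler subtasks mentioned above, CRC attachment, can be illustrated as follows. The use of CRC-32 from Python's standard library is an assumption for illustration only; 3GPP layer1 processing actually specifies its own CRC-24 and CRC-16 polynomials:

```python
import binascii

def attach_crc(payload: bytes) -> bytes:
    """Append a 4-byte CRC-32 checksum to the payload (illustrative;
    real layer1 processing uses 3GPP-defined CRC polynomials)."""
    crc = binascii.crc32(payload) & 0xFFFFFFFF
    return payload + crc.to_bytes(4, "big")

def check_crc(block: bytes) -> bool:
    """Verify the trailing CRC-32 of a block produced by attach_crc."""
    payload, tail = block[:-4], block[-4:]
    return binascii.crc32(payload).to_bytes(4, "big") == tail
```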
The allocating device 202 splits the layer1 acceleration cloud task into at least one subtask and allocates the at least one subtask to different queues according to the priority setting.
Specifically, the allocating device 202 splits the layer1 acceleration cloud task received from the user into at least one subtask, for example into a number of small stateless jobs, such as CRC (Cyclic Redundancy Check) attachment, encoding, decoding, iFFT (inverse fast Fourier transform), and FFT (fast Fourier transform) subtasks.
Here, stateless means that a subtask has no dependency on the order of preceding and following operations and does not depend on whether the user has established a context; a layer1 operation can be independent of the user and related only to the encoding, the resource mapping, and the description carried in the layer1 acceleration cloud task.
Subsequently, the allocating device 202 allocates the at least one subtask to different queues according to the priority setting. For example, assuming the layer1 acceleration cloud task received by the receiving device 201 from the job entry is split into 6 subtasks, the allocating device 202 may allocate the 6 subtasks to 6 different queues according to the priority setting, or to 4 different queues with 2 of the queues holding 2 subtasks each. A queue may thus be allocated several subtasks or only one.
Here, the allocating device 202 may allocate a high-priority subtask to a high-priority queue, to a queue served by high-performance workers, or to a queue served by a large number of workers, or may map the high-priority subtask to all queues.
It will be appreciated by those skilled in the art that the foregoing sub-tasks and distribution are exemplary and not limiting, and that other sub-tasks and distribution possibilities now or later developed, such as may be suitable for use with the present invention, are also within the scope of the present invention and are hereby incorporated by reference.
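The splitting-and-allocation step can be sketched as follows, under the assumption that priorities are small integers with lower values meaning more urgent and that lower-numbered queues have higher priority; all names here are illustrative, not from the patent:

```python
from collections import defaultdict

def allocate(subtasks, priorities, num_queues):
    """Map each subtask to a queue by priority.

    subtasks:   list of subtask names (e.g. "crc", "fft")
    priorities: {subtask: priority int, lower = more urgent}
    Subtasks with a priority beyond the queue range, or with no
    declared priority, fall into the lowest-priority queue.
    Returns {queue_id: [subtasks]}.
    """
    queues = defaultdict(list)
    for task in subtasks:
        queue_id = min(priorities.get(task, num_queues - 1), num_queues - 1)
        queues[queue_id].append(task)
    return dict(queues)
```

With 6 equal-priority subtasks and 6 queues this degenerates to one subtask per queue, matching the example in the description.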
Preferably, the acceleration device 1 further comprises a parsing device 205 (not shown). The parsing device 205 parses the data packet corresponding to the layer1 acceleration cloud task and determines the priority setting. The allocating device 202 then allocates the at least one subtask to different queues according to the priority setting determined by the parsing device 205.
Specifically, the parsing device 205 parses the data packet corresponding to the layer1 acceleration cloud task. The priority setting of the task is defined in a description file of the data packet, and by parsing the packet the parsing device 205 determines the priorities of the corresponding subtasks, that is, which subtasks are to be allocated to which queues.
For example, where the layer1 acceleration cloud task comprises a data packet carrying an execution result from layer 2, the parsing device 205 parses that packet and obtains the priority setting information from its description file.
For example, assume the receiving device 201 receives from the job portal a layer1 acceleration cloud task comprising 6 subtasks. The parsing device 205 parses the corresponding data packet and determines the relevant priority setting; the setting may, for instance, indicate that the 6 subtasks all have the same priority. The allocating device 202 then splits the task into the 6 subtasks and allocates them to 6 different queues, which in this case may share the same priority.
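Since the patent does not specify a format for the description file, the following sketch assumes a simple JSON layout purely for illustration:

```python
import json

def parse_priority(description: str) -> dict:
    """Extract {subtask: priority} from a JSON description file.

    Assumed layout: {"subtasks": [{"name": ..., "priority": ...}, ...]}
    A subtask without an explicit priority defaults to 0.
    """
    desc = json.loads(description)
    return {t["name"]: t.get("priority", 0) for t in desc.get("subtasks", [])}
```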
The triggering device 203 triggers at least one worker to obtain a corresponding subtask from at least one queue according to its work configuration and execute it, obtaining an execution result.
Specifically, the triggering device 203 triggers at least one worker to obtain and execute a corresponding subtask from at least one queue according to its work configuration. For example, if the work configuration of worker 1 is to obtain tasks from queues 1, 2 and 3, the triggering device 203 triggers worker 1 to obtain a corresponding subtask from queues 1, 2 and 3 and execute it, thereby obtaining an execution result. A worker may be configured to retrieve tasks from one or more queues.
The queues may correspond to different priorities; for example, if queues 1, 2 and 3 are ordered from highest to lowest priority, worker 1 may be configured to obtain corresponding subtasks from queues 1, 2 and 3 in that priority order.
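A worker's priority-ordered fetch can be sketched as follows; the queue representation and the `fetch_next` name are illustrative assumptions:

```python
def fetch_next(queues, work_config):
    """Pop a subtask from the highest-priority non-empty queue.

    queues:      {queue_id: list of subtasks} (mutated in place)
    work_config: queue_ids ordered from high to low priority,
                 i.e. the worker's work configuration
    Returns the subtask, or None if all configured queues are empty.
    """
    for queue_id in work_config:
        if queues.get(queue_id):
            return queues[queue_id].pop(0)
    return None
```

A worker configured with `[1, 2, 3]` thus drains queue 1 before touching queue 2, matching the priority ordering described above.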
Preferably, the at least one worker is obtained by integrating a System On Chip (SOC) and/or a Digital Signal Processor (DSP) chip.
Here, by integrating existing SOC and/or DSP chips into a heterogeneous cloud environment, a layer1 processing resource pool may be established. From the user plane it is unnecessary to distinguish between SOC and DSP chips; the hardware is simply regarded as individual workers, and the triggering device 203 triggers at least one worker to obtain a corresponding subtask from at least one queue according to its work configuration and execute it, obtaining an execution result.
The output device 204 outputs the execution result to a next-level target.
Specifically, the output device 204 outputs the execution result to the next-level target, or triggers each worker to do so. For example, the result may be output to an antenna or to an antenna data converter: as shown for workers 1 and 2 in fig. 2, the execution result is output to the antenna data converter, which then forwards it to the antenna. Alternatively, the result may be output to a receiver that monitors execution results: as shown for worker N in fig. 2, the worker may output its execution result either to the antenna or to the receiver.
Preferably, the next-level target comprises at least any one of:
an antenna;
an antenna data converter;
a Radio Remote Unit (RRU);
a receiver for monitoring the execution result;
at least one of the queues.
It will be understood by those skilled in the art that the foregoing next-level targets are merely illustrative and not restrictive of the present invention, and that other next-level targets, now known or later developed, which may be suitable for use in the present invention are also included within the scope of the present invention and are hereby incorporated by reference.
Figs. 3 to 6 show schematic illustrations of accelerating layer1 in a C-RAN according to another preferred embodiment of the invention.
In fig. 3, the layer1 acceleration cloud task sent by the user includes 6 subtasks.
In fig. 4, the layer1 accelerated cloud task is split into 6 subtasks, where subtasks 1 and 2 are assigned to queue 1, subtask 3 is assigned to queue 2, subtask 4 is assigned to queue 3, and subtasks 5 and 6 are assigned to queue N.
In fig. 5, worker 1 obtains subtask 1 from queue 1 according to its work configuration and executes it; worker 2 obtains subtask 3 from queue 2 according to its work configuration and executes it; and worker N obtains subtask 5 from queue N and executes it.
In fig. 6, worker 1 outputs execution result 1, obtained by executing subtask 1, to an antenna via the antenna data converter, and then continues by obtaining subtask 2 from queue 1 and executing it; worker 2 outputs execution result 3, obtained by executing subtask 3, to another antenna via the antenna data converter, and then continues by obtaining subtask 4 from queue 3 and executing it; worker N outputs execution result 5, obtained by executing subtask 5, to another antenna, and then continues by obtaining subtask 6 from queue N and executing it.
Preferably, when the next-level target includes at least one of the queues, the obtained execution result includes at least one new task, and the output device 204 allocates the at least one new task to at least one of the queues.
Specifically, when the triggering device 203 triggers a worker to obtain a corresponding subtask from a queue and execute it, the execution result may itself generate at least one new task; the output device 204 then allocates the at least one new task to at least one of the queues, from which one of the workers retrieves and executes it.
For example, as shown in fig. 7, worker 1 obtains subtask 2 from queue 1 and executes it; the resulting execution result is a new task 2', which is allocated back to queue 1 (it could equally be allocated to another queue, for example queue 2) and may then be obtained and executed by one of the workers. Meanwhile, worker 2 obtains subtask 4 from queue 3, executes it, and outputs execution result 4 to an antenna via the antenna data converter; worker N obtains subtask 6 from queue N, executes it, and outputs execution result 6 to a receiver that monitors execution results.
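The routing choice, sending the result onward or re-enqueuing it as a new task, can be sketched as follows; the dictionary-based result format and the `route_result` name are assumptions for illustration:

```python
def route_result(result, queues, target):
    """Route an execution result to its next-level target.

    result: {"task": ..., "queue_id": optional queue for re-enqueue}
    queues: {queue_id: list of pending tasks} (mutated on re-enqueue)
    target: "queue" to re-enqueue, otherwise the name of the target
            (e.g. "antenna", "receiver")
    """
    if target == "queue":
        # The result spawned a new task: put it back into a queue,
        # as with task 2' in fig. 7.
        queues.setdefault(result.get("queue_id", 0), []).append(result["task"])
        return "requeued"
    return f"sent to {target}"
```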
Preferably, the acceleration device 1 further comprises an adjusting device 206 (not shown). The adjusting device 206 adjusts the execution support of the at least one worker for the queues according to the allocation of subtasks in the queues and the load condition of the at least one worker.
Specifically, the adjusting device 206 may obtain the allocation status of the subtasks in each queue and the load status of each worker, for example, how many subtasks are allocated to each queue, the priority of each queue, and which queues each worker is configured to fetch subtasks from. Based on this information, the adjusting device 206 adjusts each worker's execution support for the queues. For example, when a queue has been allocated many subtasks and a worker is idle, the adjusting device 206 may trigger that worker to obtain and execute subtasks from that queue, thereby relieving the load pressure on the other workers and balancing the load across all workers.
For example, as shown in fig. 8, subtasks 1 through 6 are all assigned to queue 2, and, depending on the loads of the respective workers, workers 1 and 2 are both configured to fetch subtasks from queue 2 and execute them.
Preferably, the acceleration device 1 further comprises a judging device 207 (not shown). The judging device 207 obtains the load condition of the at least one worker and, if a worker's load is lower than a predetermined threshold, puts that worker to sleep or shuts it down.
Specifically, the judging device 207 may obtain the load condition of each worker and, if the load of one of the workers is lower than a predetermined threshold, put that worker to sleep or shut it down. The predetermined threshold, used to judge a worker's load condition, may be preset by the system or adjusted according to actual conditions.
For example, as shown in fig. 9, when the load of worker N is lower than the predetermined threshold, worker N is put to sleep or shut down.
It should be noted that the present invention may be implemented in software and/or in a combination of software and hardware, for example, as an Application Specific Integrated Circuit (ASIC), a general purpose computer or any other similar hardware device. In one embodiment, the software program of the present invention may be executed by a processor to implement the steps or functions described above. Also, the software programs (including associated data structures) of the present invention can be stored in a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. Further, some of the steps or functions of the present invention may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
In addition, portions of the present invention may be embodied as a computer program product, such as computer program instructions which, when executed by a computer, can invoke or provide the method and/or technical solution according to the present invention through the operation of the computer. Program instructions which invoke the methods of the present invention may be stored on a fixed or removable recording medium and/or transmitted via a data stream on a broadcast or other signal-bearing medium and/or stored within a working memory of a computer device operating in accordance with the program instructions. An embodiment according to the invention herein comprises an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to perform a method and/or solution according to embodiments of the invention as described above.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the apparatus claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.

Claims (14)

1. A method of accelerating layer1 in a C-RAN, wherein the method comprises:
receiving a layer1 acceleration cloud task sent by a user;
splitting the layer1 acceleration cloud task into at least one subtask, and distributing the at least one subtask to different queues according to priority setting;
triggering at least one working device to obtain corresponding subtasks from at least one queue according to the working configuration and executing the subtasks to obtain an execution result;
and outputting the execution result to a next-level target.
2. The method of claim 1, wherein the at least one worker is obtained by integrating a system-on-a-chip (SOC) and/or a Digital Signal Processor (DSP) chip.
3. The method of claim 1 or 2, wherein the next stage objective comprises at least any one of:
an antenna;
an antenna data converter;
a Radio Remote Unit (RRU);
a receiver for monitoring the execution result;
at least one of the queues.
4. The method of claim 3, wherein the next level objective includes at least one of the queues, the obtained execution result includes at least one new task, and the outputting the execution result to the next level objective comprises:
assigning the at least one new task to at least one of the queues.
5. The method of any of claims 1 to 4, wherein the method further comprises:
and analyzing the data packet corresponding to the layer1 acceleration cloud task, and determining the priority setting.
6. The method of any of claims 1 to 5, wherein the method further comprises:
and adjusting the execution support of the at least one worker on the queue according to the distribution condition of the sub-tasks in the queue and the load condition of the at least one worker.
7. The method of any of claims 1 to 6, wherein the method further comprises:
and acquiring the load condition of the at least one working device, and if the load of the working device is lower than a preset threshold value, sleeping or closing the working device.
8. An acceleration arrangement for accelerating a layer1 in a C-RAN, wherein the acceleration arrangement comprises:
the receiving device is used for receiving the layer1 acceleration cloud task sent by the user;
the distribution device is used for splitting the layer1 acceleration cloud task into at least one subtask and distributing the at least one subtask to different queues according to priority setting;
the triggering device is used for triggering at least one working device to acquire and execute corresponding subtasks from at least one queue according to the working configuration to acquire an execution result;
and the output device is used for outputting the execution result to a next-stage target.
9. The acceleration device of claim 8, wherein the at least one worker is obtained by integrating a system-on-chip (SOC) and/or a Digital Signal Processor (DSP) chip.
10. An accelerating device as in claim 8 or 9, wherein the next level objective comprises at least any one of:
an antenna;
an antenna data converter;
a Radio Remote Unit (RRU);
a receiver for monitoring the execution result;
at least one of the queues.
11. An acceleration apparatus according to claim 10, wherein the next level objective includes at least one of the queues, the obtained execution result includes at least one new task, and the output means is configured to:
assigning the at least one new task to at least one of the queues.
12. An accelerating device as in any of claims 8-11, further comprising:
and the analyzing device is used for analyzing the data packet corresponding to the layer1 acceleration cloud task and determining the priority setting.
13. An accelerating device as in any of claims 8-12, further comprising:
and the adjusting device is used for adjusting the execution support of the at least one working device on the queue according to the distribution condition of the subtasks in the queue and in combination with the load condition of the at least one working device.
14. An accelerating device as in any of claims 8-13, further comprising:
and the judging device is used for acquiring the load condition of the at least one working device, and if the load of the working device is lower than a preset threshold value, the working device is dormant or closed.
CN201810941435.4A 2018-08-17 2018-08-17 Method and device for accelerating layer1 in C-RAN Pending CN110838990A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810941435.4A CN110838990A (en) 2018-08-17 2018-08-17 Method and device for accelerating layer1 in C-RAN
PCT/CN2019/100937 WO2020035043A1 (en) 2018-08-17 2019-08-16 A method and apparatus for layer 1 acceleration in c-ran

Publications (1)

Publication Number Publication Date
CN110838990A true CN110838990A (en) 2020-02-25


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112611016A (en) * 2020-11-23 2021-04-06 青岛海尔空调电子有限公司 Multi-split multi-split outdoor unit communication method and multi-split unit

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
CN112948124B (en) * 2021-03-26 2023-09-22 浪潮电子信息产业股份有限公司 Acceleration task processing method, device, equipment and readable storage medium

Citations (6)

Publication number Priority date Publication date Assignee Title
CN102567086A (en) * 2010-12-30 2012-07-11 ***通信集团公司 Task scheduling method, equipment and system
CN103501498A (en) * 2013-08-29 2014-01-08 中国科学院声学研究所 Baseband processing resource allocation method and device thereof
CN103945548A (en) * 2014-04-29 2014-07-23 西安电子科技大学 Resource distribution system and task/service scheduling method in C-RAN
CN104123185A (en) * 2013-04-28 2014-10-29 ***通信集团公司 Resource scheduling method, device and system
CN104540234A (en) * 2015-01-19 2015-04-22 西安电子科技大学 Associated task scheduling mechanism based on CoMP synchronization constraint in C-RAN framework
CN105517176A (en) * 2015-12-03 2016-04-20 中国科学院计算技术研究所 Method for dynamic scheduling of resources of virtualized base station

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
US10095526B2 (en) * 2012-10-12 2018-10-09 Nvidia Corporation Technique for improving performance in multi-threaded processing units
CN104571042B (en) * 2014-12-31 2018-04-20 深圳市进林科技有限公司 The control method of finished and entire car controller of intelligent automobile
CN105224393B (en) * 2015-10-15 2018-10-09 西安电子科技大学 A kind of scheduling virtual machine mechanism of JT-CoMP under C-RAN frameworks
CN106572500B (en) * 2016-10-21 2020-07-28 同济大学 Scheduling method of hardware accelerator in C-RAN



Also Published As

Publication number Publication date
WO2020035043A1 (en) 2020-02-20

