WO2024017285A1 - CPU core allocation method, system, device and storage medium - Google Patents

CPU core allocation method, system, device and storage medium

Info

Publication number
WO2024017285A1
Authority
WO
WIPO (PCT)
Prior art keywords
cpu core
target
cpu
unit
core
Prior art date
Application number
PCT/CN2023/108098
Other languages
English (en)
French (fr)
Inventor
娄方亮
王韬
李林
Original Assignee
ZTE Corporation (中兴通讯股份有限公司)
Priority date
Filing date
Publication date
Application filed by ZTE Corporation (中兴通讯股份有限公司)
Publication of WO2024017285A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources to service a request
    • G06F9/5027 Allocation of resources to service a request, the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5011 Allocation of resources to service a request, the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016 Allocation of resources to service a request, the resources being hardware resources other than CPUs, Servers and Terminals, the resource being the memory

Definitions

  • This application relates to the field of communication technology, for example to a method, system, device and storage medium for allocating central processing unit (CPU) cores.
  • In the network architecture of the 5G core network, the User Plane Function (UPF), as a data plane network element, performs the data forwarding function and has extremely high performance requirements.
  • The UPF device follows the design concept of software-hardware decoupling and is implemented on general-purpose servers with multi-core CPUs, using Data Plane Development Kit (DPDK) acceleration technology.
  • This application provides a CPU core allocation method, system, device and storage medium, which solve the problem that, under a multi-core CPU architecture, a uniform frequency and power consumption configuration across CPU cores wastes CPU computing power.
  • This application provides a CPU core allocation method applied to a resource management component, including: obtaining CPU core frequency reconfiguration information of at least one service unit in a network element; determining a target CPU core according to the CPU core frequency reconfiguration information; and allocating the target CPU core to the network element.
  • This application provides a CPU core allocation method applied to a network element, including: determining CPU core frequency reconfiguration information of at least one service unit in the network element according to the service characteristics of the service to be transmitted; sending the CPU core frequency reconfiguration information to a resource management component; and receiving the target CPU core allocated by the resource management component.
  • The present application provides a CPU core allocation system, including a network element and a resource management component, where the network element establishes a communication connection with the resource management component; the resource management component is configured to, upon acquiring CPU core frequency reconfiguration information of at least one service unit in the network element, determine the target CPU core according to the CPU core frequency reconfiguration information and allocate the CPU core to the network element.
  • The present application provides a communication device, including a memory and one or more processors; the memory is configured to store one or more programs; when the one or more programs are executed by the one or more processors, the one or more processors implement the above CPU core allocation method.
  • The present application provides a storage medium storing a computer program which, when executed by a processor, implements the above CPU core allocation method.
  • Figure 1 is a structural block diagram of a network element provided by an embodiment of the present application.
  • Figure 2 is a hardware architecture diagram of a network element provided by an embodiment of the present application.
  • FIG. 3 is a flow chart of a CPU core allocation method provided by an embodiment of the present application.
  • FIG. 4 is a flow chart of another CPU core allocation method provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of allocation interaction of a CPU core provided by an embodiment of the present application.
  • FIG. 6 is a structural block diagram of a CPU core allocation system provided by an embodiment of the present application.
  • Figure 7 is a schematic diagram of the configuration relationship between the load level, functional units and business units of a CPU core provided by an embodiment of the present application;
  • Figure 8 is a schematic diagram of dynamic adjustment of CPU frequency by a CPU management technology provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of allocation of CPU cores provided by an embodiment of the present application.
  • FIG. 10 is a structural block diagram of a CPU core allocation device provided by an embodiment of the present application.
  • FIG. 11 is a structural block diagram of another CPU core allocation device provided by an embodiment of the present application.
  • Figure 12 is a schematic structural diagram of a communication device provided by an embodiment of the present application.
  • Network function virtualization requires communication equipment to be designed with software separated from hardware. Communication equipment hardware has shifted from complex dedicated hardware devices to general-purpose servers with a complete ecosystem and rich functionality, while complex and changing service requirements are implemented in software code, effectively resolving the main contradiction described above.
  • In its design and implementation, the UPF likewise follows the concept of network function virtualization: it uses general-purpose servers as the underlying hardware and, based on virtualization technology, abstracts the server's CPU, memory, disk, network card and other hardware resources into virtualized resources for allocation and use.
  • Hardware resources are virtualized and virtual resources are cloudified, serving as cloud system infrastructure for upper-layer applications.
  • The CPU design of general-purpose servers is constrained by physical limits, so the clock frequency increase of a single CPU core is limited.
  • A multi-core architecture is usually used to scale performance horizontally.
  • UPF adopts a core-bound design in its process scheduling strategy.
  • CPU core affinity is set so that each thread runs on its designated CPU core for a long time.
  • UPF uses core isolation technology to isolate the cores used by the service from operating system scheduling, so that those cores are not scheduled by the operating system, preventing the operating system from migrating threads away from the designated cores.
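  • As an illustration of the core-binding idea described above, the following minimal Python sketch pins the calling process to one designated CPU core through the Linux scheduler affinity interface; the core number used here is hypothetical and chosen only for this example.

        import os

        def bind_to_core(core_id: int) -> None:
            # Pin the calling process to a single CPU core; threads created
            # afterwards inherit this affinity mask (Linux only).
            os.sched_setaffinity(0, {core_id})

        bind_to_core(20)  # e.g. keep a packet-processing worker on core 20
        print("running on cores:", os.sched_getaffinity(0))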
  • UPF network elements are designed as multiple microservices with different responsibilities, and different microservices run in different virtual machines or containers. Because of their different responsibilities, the microservices have different CPU computing power requirements, which conflicts with treating all CPU cores identically: when the cores of microservices that need more computing power are already running at nearly full capacity, some CPU cores still retain redundant computing power. Because of the CPU core affinity settings and the core isolation strategy, the UPF cannot rely on operating system scheduling to dynamically match CPU computing power to actual demand. This wastes CPU computing power; some highly loaded CPU cores reach their computing limit before the others and become the bottleneck for improving the performance of the whole system.
  • Speed Select Technology (SST) is selected to finely control the operating state, frequency and power consumption of single and multiple processor cores; without changing the overall power consumption and computing capability of the CPU, computing power is distributed more reasonably among the cores to improve the forwarding capability of the whole UPF device.
  • The embodiments of this application solve the problem that, under a multi-core architecture, a uniform frequency and power consumption configuration across cores wastes CPU computing power and leaves it underutilized.
  • the embodiment of this application provides a high-performance UPF network element based on CPU management, which can be widely used in 5G core network architecture and serves as a user plane message forwarding unit to achieve high-speed forwarding of network messages.
  • network element refers to the data plane forwarding network element.
  • the network element may be UPF.
  • the network element includes multiple service units.
  • Figure 1 is a structural block diagram of a network element provided by an embodiment of the present application.
  • the network element can divide the service units according to different service functions.
  • UPF can include six service units, namely: the TECS Cloud Foundation (TCF, the cloud base of the Tulip Elastic Cloud System, TECS), the Operation Maintain Unit (OMU), the Interface Process Unit (IPU), the General Service Unit (GSU), the Central Data Unit (CDU) and the Packet Forwarding Unit (PFU).
  • The OMU is used to manage the UPF and to communicate with the Element Management System (EMS) and the Virtualized Network Function Manager (VNFM); the IPU is used for signaling transport processing and charging record processing; the GSU is used for service access; the PFU is used to communicate with network elements other than the EMS/VNFM and to forward user plane packets; the CDU is used for data storage and backup.
  • FIG. 2 is a hardware architecture diagram of a network element provided by an embodiment of the present application.
  • The main task of the UPF is packet forwarding; its hardware core components are the CPU, memory and network card.
  • The two CPUs use the Non-Uniform Memory Access (NUMA) architecture and jointly serve as the computing core of the entire device.
  • Each CPU has its own independent memory and its own high-speed serial expansion bus (Peripheral Component Interconnect Express, PCIe) slots.
  • Both CPUs have a multi-core architecture and are composed of multiple physical cores.
  • the network card is mainly responsible for receiving and sending packets, and the processing of packets is handled by specific cores on the CPU specified through distribution rules.
  • the load on the CPU core responsible for user plane packet processing and forwarding will increase as the throughput of the UPF device increases.
  • The load on the CPU cores responsible for running the internal management processes of the UPF network element and for controlling packet forwarding processing is relatively stable, remaining at a low level and not fluctuating significantly with changes in the overall throughput of the UPF device.
  • FIG. 3 is a flow chart of a CPU core allocation method provided by an embodiment of the present application. This embodiment is applied to the situation where the CPU core frequency is dynamically configured in a high-performance UPF of cloud system infrastructure. This embodiment can be executed by the resource management component. As shown in Figure 3, this embodiment includes: S310-S330.
  • the CPU core frequency reconfiguration information refers to information about reconfiguring the current operating frequency of the CPU core.
  • The CPU core frequency reconfiguration information includes one of the following: the target number of CPU cores corresponding to each service unit; the target load level to which the CPU cores corresponding to each service unit belong; or the target frequency range to which the CPU cores corresponding to each service unit belong; where there is a one-to-one correspondence between the target load level and the target frequency range of each CPU core.
  • each business unit may correspond to one type of CPU core.
  • the corresponding CPU core frequency ranges and load levels between different business units may be different or the same.
  • the CPU core frequency reconfiguration information may carry configuration information of CPU core frequencies of multiple business units.
  • the CPU core frequency configuration information of multiple business units can be treated as a whole.
  • the CPU core frequency reconfiguration information may carry the CPU core frequency configuration information of a business unit, and each business unit corresponds to one CPU core frequency configuration information.
  • the business units in the network element can be divided into four different functional units according to the service functions corresponding to the different business units in the network element, and each functional unit corresponds to a service level.
  • a functional unit includes at least one business unit.
  • the service levels corresponding to the business units belonging to the same functional unit are the same.
  • the service levels corresponding to the business units not belonging to the same functional unit are different.
  • the resource management component may divide the service level into four levels, and configure all CPU cores in the CPU bus to the lowest service level by default.
  • The target number of CPU cores corresponding to each service unit refers to the number of CPU cores required by service units belonging to different functional units in the network element; the target load level of the CPU cores corresponding to each service unit refers to the load level corresponding to that service unit; the target frequency range of the CPU cores corresponding to each service unit refers to the frequency range corresponding to that service unit.
  • There is a one-to-one correspondence between each target load level and target frequency range.
  • For example, the load levels are level0, level1, level2 and level3, where level0 corresponds to the frequency range 1.4GHz-1.6GHz, level1 to 1.6GHz-2.0GHz, level2 to 2.0GHz-2.2GHz, and level3 to 2.3GHz-2.5GHz.
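  • A minimal sketch of the load-level-to-frequency-range mapping described above; the GHz values follow the example in the preceding item, and the data structure itself is only illustrative.

        # Load level -> (min_GHz, max_GHz), one-to-one as in the example above.
        LEVEL_TO_FREQ_RANGE = {
            "level0": (1.4, 1.6),
            "level1": (1.6, 2.0),
            "level2": (2.0, 2.2),
            "level3": (2.3, 2.5),
        }

        def frequency_range_for(level: str) -> tuple:
            return LEVEL_TO_FREQ_RANGE[level]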
  • the resource management component may allocate CPU core resources to each service unit.
  • The resource management component obtains the CPU core frequency reconfiguration information of at least one service unit in the network element and, when allocating CPU core resources, maps the target CPU cores corresponding to each service unit to the corresponding frequency range based on the principle of on-demand allocation.
  • the target CPU core refers to the CPU core allocated by the resource management component to the business unit in the network element.
  • the resource management component searches for a matching CPU core according to the CPU core frequency reconfiguration information, and uses the matching CPU core as the target CPU core.
  • the resource management component configures one or more CPU cores for each business unit based on the principle of on-demand allocation.
  • After the resource management component determines the target CPU core, it allocates the target CPU core to the network element, so that the service unit in the network element performs service processing through the target CPU core.
  • The technical solution of this embodiment obtains the CPU core frequency reconfiguration information of at least one service unit in the network element, dynamically adjusts the operating frequency of CPU cores according to that information to obtain target CPU cores adjusted to the target frequency range, and allocates the target CPU cores to the network element. This solves the technical problem of load imbalance between cores, dynamically matches the total CPU core computing power to the load of the service processes, and thereby improves the packet forwarding performance of the network element.
  • Before allocating the target CPU core to the network element, the method further includes: transmitting the core index corresponding to the target CPU core and the CPU core frequency reconfiguration information to the CPU, so that the CPU reconfigures the target CPU core according to the CPU core frequency reconfiguration information and the core index; and receiving a successful transmission message fed back by the CPU.
  • the core index refers to the index number corresponding to each target CPU core.
  • each CPU core is configured with a unique core index.
  • During network element instantiation and deployment, the CPU core frequency reconfiguration information is sent to the resource management component, which determines the target CPU core according to the information and delivers the CPU core frequency reconfiguration information and the core index of the target CPU core to the CPU bus, so that the CPU bus configures the operating frequency of the target CPU core; this completes the interaction between the network element and the CPU and effectively ensures that each service unit in the network element runs on its CPU cores at the configured operating frequency.
  • determining the target CPU core based on the CPU core frequency reconfiguration information includes: selecting a target number of candidate CPU cores based on the target load level or target frequency range; and determining the target CPU core based on the business status of the candidate CPU cores.
  • the candidate CPU core refers to the CPU core in the CPU bus that matches the target load level or target frequency range; the business status refers to the current running status of the candidate CPU core.
  • the service state may include: idle state and running state. Among them, the idle state refers to the state in which the CPU core is idle; the running state refers to the state in which the CPU core is running.
  • After the resource management component selects the target number of candidate CPU cores based on the target load level or target frequency range, it needs to determine the current service status of each candidate CPU core and, based on that status, determine whether the candidate CPU core can be used as the target CPU core.
  • Selecting a target number of candidate CPU cores based on the target load level or target frequency range includes: determining the corresponding target service level based on the target load level or target frequency range; and selecting a target number of candidate CPU cores based on the target service level.
  • the target service level is used to characterize the priority of the CPU core in the resource management component.
  • For example, the service levels are divided into Class of Service (CLOS) 0, CLOS1, CLOS2 and CLOS3, where CLOS3 corresponds to the frequency range 1.4GHz-1.6GHz, CLOS2 to 1.6GHz-2.0GHz, CLOS1 to 2.0GHz-2.2GHz, and CLOS0 to 2.3GHz-2.5GHz.
  • The resource management component can determine the target service level according to the target load level or target frequency range corresponding to the service unit and, according to the target service level, select from all CPU cores on the CPU bus a target number of CPU cores matching the target frequency range as candidate CPU cores.
  • Determining the target CPU core according to the service status of the candidate CPU core includes: in response to the service status of the candidate CPU core being the idle state, directly using the candidate CPU core as the target CPU core; in response to the service status of the candidate CPU core being the running state, releasing the allocated CPU core corresponding to the service unit, adjusting the operating frequency of the allocated CPU core to the target frequency range, and using the allocated CPU core as the target CPU core.
  • the allocated CPU core refers to the CPU core that has been allocated to the business unit before the CPU core frequency corresponding to the business unit is reconfigured.
  • When the candidate CPU core is idle, the resource management component can directly use the candidate CPU core as the target CPU core corresponding to the service unit.
  • When the candidate CPU core is running, the resource management component can adjust the operating frequency of the CPU core already allocated to the service unit to the target frequency range, and use that allocated CPU core, adjusted to the target frequency range, as the target CPU core.
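  • A simplified Python sketch of the selection logic described in the preceding items: pick a target number of candidate cores that match the target service level, use idle candidates directly, and otherwise fall back to retuning the cores already allocated to the service unit. The CpuCore fields and helper names are assumptions made for illustration, not part of the embodiment.

        from dataclasses import dataclass

        @dataclass
        class CpuCore:
            index: int          # unique core index
            clos: int           # service level (CLOS) the core is associated with
            idle: bool          # True = idle state, False = running state
            freq_range: tuple   # (min_GHz, max_GHz) currently configured

        def pick_target_cores(cores, target_clos, target_count, target_range, allocated):
            # Return target cores for one service unit (hypothetical helper).
            candidates = [c for c in cores if c.clos == target_clos][:target_count]
            targets = []
            for cand in candidates:
                if cand.idle:
                    targets.append(cand)            # idle candidate: use it directly
                elif allocated:
                    core = allocated.pop()          # running candidate: reuse a core already
                    core.freq_range = target_range  # allocated to the unit, retuned in place
                    targets.append(core)
            return targets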
  • Determining the target CPU core based on the CPU core frequency reconfiguration information includes: obtaining a target number of idle CPU cores; adjusting the current operating frequency of the idle CPU cores to the target frequency range; and using the idle CPU cores adjusted to the target frequency range as the target CPU cores.
  • After the resource management component receives the CPU core frequency reconfiguration information, it selects a target number of idle CPU cores and determines the current operating frequency of each; if a current operating frequency is not within the target frequency range, the resource management component may adjust it to the target frequency range and use the idle CPU core adjusted to the target frequency range as a target CPU core.
  • Before obtaining the CPU core frequency reconfiguration information of at least one service unit in the network element, the method further includes: sending a default load level and a default frequency range to the CPU, so that the CPU configures the CPU cores according to the default load level and default frequency range; and receiving a successful configuration message fed back by the CPU.
  • the default load level and default frequency range refer to the load level and frequency range configured for each CPU core in the CPU bus before the resource management component configures the service software package for the network element.
  • the default frequency range is the smallest frequency range, and the service level corresponding to the default load level is CLOS3.
  • FIG. 4 is a flow chart of another CPU core allocation method provided by an embodiment of the present application.
  • This embodiment can be executed by a network element.
  • the network element may be UPF.
  • the CPU core allocation method in this embodiment includes: S410-S430.
  • S410 Determine the CPU core frequency reconfiguration information of at least one service unit in the network element according to the service characteristics of the service to be transmitted.
  • the service characteristics of the services to be transmitted refer to the attribute information of the services to be transmitted processed by each business unit.
  • the service characteristics of the service to be transmitted include: service load; and service type.
  • The CPU core frequency reconfiguration information includes one of the following: the target number of CPU cores corresponding to each service unit; the target load level to which the CPU cores corresponding to each service unit belong; or the target frequency range to which the CPU cores corresponding to each service unit belong; where there is a one-to-one correspondence between the target load level and the target frequency range of each CPU core.
  • the CPU core frequency reconfiguration information is related to the service characteristics of the service unit and the service software package specifications used by the network element.
  • The load level or frequency range corresponding to each service unit, and the target number of target CPU cores, can be determined according to the service software package specification adopted by the network element. Since the network element includes multiple service units, when the service characteristics of those service units change and the service units belong to different functional units, the target number of CPU cores corresponding to each service unit and the target load level or target frequency range of those CPU cores are determined, and the corresponding CPU core frequency reconfiguration information is generated.
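  • A hedged sketch of how a network element could assemble this reconfiguration information from its software-package specification; the field names and the shape of package_spec are assumptions for illustration only.

        def build_reconfig_info(package_spec: dict) -> list:
            # package_spec is assumed to map a service unit name to its target core
            # count and load level, e.g. {"PFU": {"cores": 24, "level": "level3"}}.
            info = []
            for unit, spec in package_spec.items():
                info.append({
                    "service_unit": unit,
                    "target_core_count": spec["cores"],
                    "target_load_level": spec["level"],  # or a target frequency range
                })
            return info

        reconfig = build_reconfig_info({"PFU": {"cores": 24, "level": "level3"},
                                        "OMU": {"cores": 4, "level": "level1"}})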
  • The network element sends the CPU core frequency reconfiguration information to the resource management component, so that the resource management component dynamically adjusts the frequency of CPU cores according to the CPU core frequency reconfiguration information and obtains the corresponding target CPU cores.
  • the network element receives the target CPU core allocated by the resource management component to process and transmit the services to be transmitted through the CPU core.
  • FIG. 5 is a schematic diagram of allocation interaction of a CPU core provided by an embodiment of the present application. As shown in Figure 5, the CPU core allocation process in this embodiment includes:
  • Default configuration information includes: default load level and default frequency range.
  • SST allows the user to define the priority of each CPU core.
  • Speed Select Technology - Core Power (SST-CP) serves as the interface for this function and defines a mechanism for allocating power between cores under power constraints.
  • the priority of the CPU core is delivered in the form of CLOS configuration.
  • Each CLOS defines the maximum frequency and minimum frequency allowed to be used, which determines how to limit the frequency and allocate power.
  • Each CPU core obtains its own priority by being associated with a CLOS.
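  • The CLOS mechanism described above can be modelled as a small configuration object: each CLOS carries a minimum and maximum frequency, and each core obtains its priority by being associated with one CLOS. The sketch below is a data-model illustration only; programming SST-CP on real hardware goes through platform interfaces (for example the Linux intel-speed-select utility) rather than this code.

        from dataclasses import dataclass

        @dataclass
        class ClosConfig:
            clos_id: int
            min_ghz: float
            max_ghz: float

        # Four CLOS levels; CLOS0 is the highest-priority / highest-frequency class.
        CLOS_TABLE = {
            0: ClosConfig(0, 2.3, 2.5),
            1: ClosConfig(1, 2.0, 2.2),
            2: ClosConfig(2, 1.6, 2.0),
            3: ClosConfig(3, 1.4, 1.6),
        }

        # Core -> CLOS association; a core's frequency and power budget follow its CLOS.
        core_to_clos = {core: 3 for core in range(44)}  # default: lowest level, CLOS3
        core_to_clos[20] = 0                            # e.g. promote a forwarding core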
  • the resource management component divides CLOS into four levels, and sets all CPU cores to the lowest CLOS level by default (for example, CLOS3, frequency range is 1.4GHz-1.6GHz).
  • the resource management component is responsible for allocating CPU core resources to each service unit (also called a service container).
  • After CPU management technology is used for performance enhancement, the resource management component obtains the CPU core frequency level configuration of each service unit from the software package and, when allocating core resources, maps the CPU cores to a higher CLOS level based on the principle of on-demand allocation and sets the corresponding frequency range.
  • The embodiments of this application integrate into the UPF network element a strategy for dynamically adjusting CPU core frequency and power consumption based on SST and hierarchical CPU management, so that the distribution of computing power among cores is adjusted according to the actual load and can be flexibly customized and changed according to actual service needs; by using an algorithm that dynamically configures CPU core frequencies, the CPU cores are classified and managed, and, in the presence of inter-core process scheduling, the problem of load imbalance between cores is solved by dynamically adjusting the operating frequency of the CPU cores.
  • FIG. 6 is a structural block diagram of a CPU core allocation system provided by an embodiment of the present application.
  • The CPU core allocation system in this embodiment includes a network element 610 and a resource management component 620, where the network element 610 establishes a communication connection with the resource management component 620; when the resource management component 620 obtains the CPU core frequency reconfiguration information of at least one service unit in the network element 610, it determines the target CPU core according to the CPU core frequency reconfiguration information and allocates the CPU core to the network element 610.
  • The technical solution of this embodiment obtains the CPU core frequency reconfiguration information of at least one service unit in the network element, dynamically adjusts the operating frequency of CPU cores according to that information to obtain target CPU cores adjusted to the target frequency range, and allocates the target CPU cores to the network element, solving the technical problem of load imbalance between cores, dynamically matching the total CPU core computing power to the load of the service processes, and thereby improving the packet forwarding performance of the network element.
  • The CPU core allocation system further includes a processor; the processor establishes a communication connection with the resource management component; the resource management component transmits the core index corresponding to the target CPU core and the CPU core reconfiguration information to the processor, so that the CPU reconfigures the target CPU core according to the CPU core frequency reconfiguration information and the core index.
  • After the resource management component determines the target CPU core, the core index corresponding to the target CPU core and the CPU core reconfiguration information are transmitted to the processor, so that the CPU reconfigures the target CPU core according to the CPU core frequency reconfiguration information and the core index.
  • The network element includes: a cluster scheduling unit, a network element management unit, a control signaling processing unit and a data forwarding unit; the load levels and frequency ranges corresponding to the cluster scheduling unit, network element management unit, control signaling processing unit and data forwarding unit are all different, and the total computing power of the CPU cores corresponding to these four units is a fixed value.
  • The cluster scheduling unit includes the cloud base; the network element management unit includes the operation and maintenance unit; the control signaling processing unit includes the interface processing unit, general service unit and central data unit; the data forwarding unit includes the packet forwarding unit.
  • Because the total computing power of the CPU cores corresponding to the cluster scheduling unit, network element management unit, control signaling processing unit and data forwarding unit is a fixed value, when the operating frequency of one service unit is increased, the operating frequency of other service units needs to be reduced, so that the total CPU core computing power remains constant.
  • the operating frequencies of the CPU cores are configured differently according to the types of running functional units to achieve dynamic allocation of computing power among the CPU cores.
  • FIG. 7 is a schematic diagram of the configuration relationship between the load level, functional units and business units of a CPU core provided by an embodiment of the present application.
  • the network element includes six business units, namely: TCF, OMU, IPU, GSU, CDU and PFU; and the functional units include: cluster scheduling unit, network element management unit, control signaling processing unit and Data forwarding unit.
  • the cluster scheduling unit includes: TCF;
  • the network element management unit includes: OMU;
  • the control signaling processing unit includes: IPU, GSU and CDU;
  • the data forwarding unit includes: PFU.
  • the load levels corresponding to the cluster scheduling unit, network element management unit, control signaling processing unit and data forwarding unit are level0, level1, level2 and level3 respectively.
  • FIG. 8 is a schematic diagram of dynamic adjustment of CPU frequency by a CPU management technology provided by an embodiment of the present application.
  • Without CPU management technology, the running frequency of each CPU core (i.e. core 0, core 1, ..., core n) is 2.2GHz; with CPU management technology, the operating frequency of each CPU core can be adjusted dynamically. For example, the operating frequency of core 0 can be lowered from 2.2GHz to 1.5GHz while the operating frequency of core 1 is raised from 2.2GHz to 2.4GHz.
  • While keeping the overall CPU power consumption and total computing power unchanged, this embodiment adds to the UPF the ability to dynamically configure CPU core frequencies, achieving dynamic allocation of frequency and power consumption among CPU cores, improving the utilization of CPU computing power and the forwarding performance of the UPF; the same hardware configuration can achieve higher throughput, effectively reducing user cost at a given throughput.
  • FIG. 9 is a schematic diagram of allocation of CPU cores provided by an embodiment of the present application.
  • According to the specification of the UPF service software package, the load level and frequency range corresponding to each service unit in the UPF can be determined, and the load level and frequency range of each service unit are sent to the resource management component, so that the resource management component dynamically allocates target CPU cores according to the load level (also called the frequency level) and frequency range of each service unit.
  • For example, the target CPU cores corresponding to TCF are core0-core3; to OMU, core4-core7; to IPU, core8-core11; to GSU, core12-core15; to CDU, core16-core19; and to PFU, core20-core43.
  • After the resource management component determines the target CPU cores corresponding to each service unit, it allocates them to the corresponding service units and sends the operating frequency of each CPU core to the CPU bus, so that the CPU bus adjusts the operating frequency of the target CPU cores to the corresponding frequency range.
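  • Putting the Figure 9 example together, the sketch below records one possible unit-to-core assignment and the per-core frequency range a resource management component might send toward the CPU bus; the ranges follow the level0-level3 example of Figure 7, and send_to_cpu_bus is a placeholder for whatever delivery mechanism the platform actually provides.

        UNIT_CORES = {
            "TCF": range(0, 4),   "OMU": range(4, 8),   "IPU": range(8, 12),
            "GSU": range(12, 16), "CDU": range(16, 20), "PFU": range(20, 44),
        }
        UNIT_RANGE = {  # per-unit target frequency range in GHz
            "TCF": (1.4, 1.6), "OMU": (1.6, 2.0), "IPU": (2.0, 2.2),
            "GSU": (2.0, 2.2), "CDU": (2.0, 2.2), "PFU": (2.3, 2.5),
        }

        def send_to_cpu_bus(core: int, freq_range: tuple) -> None:
            # Placeholder: deliver the per-core frequency configuration to the CPU/bus.
            print(f"core {core}: set range {freq_range[0]}-{freq_range[1]} GHz")

        for unit, cores in UNIT_CORES.items():
            for core in cores:
                send_to_cpu_bus(core, UNIT_RANGE[unit])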
  • FIG. 10 is a structural block diagram of a CPU core allocation device provided by an embodiment of the present application. This embodiment is applied to resource management components. As shown in Figure 10, the CPU core allocation device in this embodiment includes: an acquisition module 1010, a determination module 1020, and an allocation module 1030.
  • The acquisition module 1010 is configured to acquire the CPU core frequency reconfiguration information of at least one service unit in the network element; the determination module 1020 is configured to determine the target CPU core according to the CPU core frequency reconfiguration information; the allocation module 1030 is configured to allocate the target CPU core to the network element.
  • the CPU core allocation device before allocating the target CPU core to the network element, the CPU core allocation device further includes:
  • The transmission module is configured to transmit the core index corresponding to the target CPU core and the CPU core frequency reconfiguration information to the CPU, so that the CPU reconfigures the target CPU core according to the CPU core frequency reconfiguration information and the core index; the first receiving module is configured to receive the successful transmission message fed back by the CPU.
  • The CPU core frequency reconfiguration information includes one of the following: the target number of CPU cores corresponding to each service unit; the target load level to which the CPU cores corresponding to each service unit belong; or the target frequency range to which the CPU cores corresponding to each service unit belong; where there is a one-to-one correspondence between the target load level and the target frequency range of each CPU core.
  • the determining module 1020 includes:
  • the selection unit is configured to select a target number of candidate CPU cores based on the target load level or target frequency range; the first determination unit is configured to determine the target CPU core based on the business status of the candidate CPU cores.
  • the determining module 1020 includes:
  • The acquisition unit is configured to acquire a target number of idle CPU cores; the adjustment unit is configured to adjust the current operating frequency of the idle CPU cores to the target frequency range; the second determination unit is configured to use the idle CPU cores adjusted to the target frequency range as the target CPU cores.
  • the selection unit includes:
  • the first determination subunit is configured to determine the corresponding target service level according to the target load level or the target frequency range; the selection subunit is configured to select a target number of candidate CPU cores according to the target service level.
  • the determining unit includes:
  • The second determination subunit is configured to, in response to the service status of the candidate CPU core being the idle state, directly use the candidate CPU core as the target CPU core;
  • the third determination subunit is configured to, in response to the service status of the candidate CPU core being the running state, release the allocated CPU core corresponding to the service unit, adjust the operating frequency of the allocated CPU core to the target frequency range, and use the allocated CPU core as the target CPU core.
  • the CPU core allocation device before obtaining the CPU core frequency reconfiguration information of at least one service unit in the network element, the CPU core allocation device further includes:
  • the sending module is configured to send the default load level and default frequency range to the CPU so that the CPU configures the CPU core according to the default load level and default frequency range;
  • the second receiving module is configured to receive a successful configuration message fed back by the CPU.
  • the CPU core allocation device provided in this embodiment is configured to implement the CPU core allocation method applied to the resource management component in the embodiment shown in Figure 3.
  • The implementation principles and technical effects of the CPU core allocation device provided in this embodiment are similar to those of the method, and no further details are given here.
  • FIG. 11 is a structural block diagram of another CPU core allocation device provided by an embodiment of the present application. This embodiment is applied to network elements. As shown in Figure 11, the CPU core allocation device in this embodiment includes: a determination module 1110, a sending module 1120, and a receiving module 1130.
  • The determination module 1110 is configured to determine the CPU core frequency reconfiguration information of at least one service unit in the network element according to the service characteristics of the service to be transmitted; the sending module 1120 is configured to send the CPU core frequency reconfiguration information to the resource management component; the receiving module 1130 is configured to receive the target CPU core allocated by the resource management component.
  • The CPU core frequency reconfiguration information includes one of the following: the target number of CPU cores corresponding to each service unit; the target load level to which the CPU cores corresponding to each service unit belong; or the target frequency range to which the CPU cores corresponding to each service unit belong; where there is a one-to-one correspondence between the target load level and the target frequency range of each CPU core.
  • the CPU core allocation device provided by this embodiment is configured to implement the CPU core allocation method applied to network elements in the embodiment shown in Figure 4.
  • The implementation principles and technical effects of the CPU core allocation device provided by this embodiment are similar to those of the method and are not repeated here.
  • FIG. 12 is a schematic structural diagram of a communication device provided by an embodiment of the present application.
  • the device provided by this application includes: a processor 1210 and a memory 1220.
  • the number of processors 1210 in the device may be one or more.
  • one processor 1210 is taken as an example.
  • the number of memories 1220 in the device may be one or more.
  • one memory 1220 is taken as an example.
  • the processor 1210 and the memory 1220 of the device can be connected through a bus or other means. In Figure 12, the connection through the bus is taken as an example.
  • the device may be a resource management component.
  • The memory 1220 can be configured to store software programs, computer-executable programs and modules, such as the program instructions/modules corresponding to the device of any embodiment of the present application (for example, the acquisition module 1010, determination module 1020 and allocation module 1030 in the CPU core allocation device).
  • the memory 1220 may include a stored program area and a stored data area, where the stored program area may store an operating system and an application program required for at least one function; the stored data area may store data created according to use of the device, and the like.
  • the memory 1220 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
  • memory 1220 may include memory located remotely from processor 1210, and these remote memories may be connected to the device through a network.
  • networks include but are not limited to the Internet, intranets, local area networks, mobile communication networks and combinations thereof.
  • When the communication device is a resource management component, the device provided above can be configured to execute the CPU core allocation method applied to the resource management component provided by any of the above embodiments, and has corresponding functions and effects.
  • When the communication device is a network element, the device provided above can be configured to execute the CPU core allocation method applied to the network element provided by any of the above embodiments, and has corresponding functions and effects.
  • Embodiments of the present application also provide a storage medium containing computer-executable instructions.
  • When executed by a computer processor, the computer-executable instructions are used to execute a CPU core allocation method applied to a resource management component.
  • The method includes: obtaining CPU core frequency reconfiguration information of at least one service unit in the network element; determining the target CPU core based on the CPU core frequency reconfiguration information; and allocating the target CPU core to the network element.
  • Embodiments of the present application also provide a storage medium containing computer-executable instructions.
  • When executed by a computer processor, the computer-executable instructions are used to execute a CPU core allocation method applied to a network element.
  • The method includes: determining the CPU core frequency reconfiguration information of at least one service unit in the network element according to the service characteristics of the service to be transmitted; sending the CPU core frequency reconfiguration information to the resource management component; and receiving the target CPU core allocated by the resource management component.
  • user equipment encompasses any suitable type of wireless user equipment, such as a mobile phone, a portable data processing device, a portable web browser or a vehicle-mounted mobile station.
  • the various embodiments of the present application may be implemented in hardware or special purpose circuitry, software, logic, or any combination thereof.
  • some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software that may be executed by a controller, microprocessor, or other computing device, although the application is not limited thereto.
  • Embodiments of the present application may be implemented by a data processor of the mobile device executing computer program instructions, for example in a processor entity, or by hardware, or by a combination of software and hardware.
  • Computer program instructions may be assembly instructions, Instruction Set Architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages.
  • Any block diagram of a logic flow in the figures of this application may represent program operations, or may represent interconnected logic circuits, modules, and functions, or may represent a combination of program operations and logic circuits, modules, and functions.
  • Computer programs can be stored on memory.
  • The memory may be of any type suitable for the local technical environment and may be implemented using any suitable data storage technology, such as but not limited to Read-Only Memory (ROM), Random Access Memory (RAM), and optical storage devices and systems (Digital Video Disc (DVD) or Compact Disc (CD)), etc.
  • Computer-readable media may include non-transitory storage media.
  • The data processor may be of any type suitable for the local technical environment, such as but not limited to general-purpose computers, special-purpose computers, microprocessors, digital signal processors (DSP), application-specific integrated circuits (ASIC), field-programmable gate arrays (FPGA) and processors based on a multi-core processor architecture.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Multi Processors (AREA)

Abstract

A CPU core allocation method, system, device and storage medium. The CPU core allocation method applied to a resource management component includes: obtaining CPU core frequency reconfiguration information of at least one service unit in a network element (S310); determining a target CPU core according to the CPU core frequency reconfiguration information (S320); and allocating the target CPU core to the network element (S330).

Description

CPU core allocation method, system, device and storage medium
Technical Field
This application relates to the field of communication technology, for example to a method, system, device and storage medium for allocating central processing unit (CPU) cores.
Background
With the development of mobile communication technology, new network services place higher demands on communication networks and have driven the development of fifth-generation (5G) mobile communication technology. In the network architecture of the 5G core network, the User Plane Function (UPF), as a data plane network element, performs the data forwarding function and has extremely high performance requirements. The UPF device follows the design concept of software-hardware decoupling and is implemented on general-purpose servers with multi-core CPUs, using Data Plane Development Kit (DPDK) acceleration technology. While the multi-core CPU architecture combined with DPDK acceleration improves performance, it restricts the flexible scheduling of service load between cores.
Summary
This application provides a CPU core allocation method, system, device and storage medium, which solve the problem that, under a multi-core CPU architecture, a uniform frequency and power consumption configuration across CPU cores wastes CPU computing power.
This application provides a CPU core allocation method applied to a resource management component, including:
obtaining CPU core frequency reconfiguration information of at least one service unit in a network element; determining a target CPU core according to the CPU core frequency reconfiguration information; and allocating the target CPU core to the network element.
This application provides a CPU core allocation method applied to a network element, including:
determining CPU core frequency reconfiguration information of at least one service unit in the network element according to the service characteristics of the service to be transmitted; sending the CPU core frequency reconfiguration information to a resource management component; and receiving the target CPU core allocated by the resource management component.
This application provides a CPU core allocation system, including a network element and a resource management component, where the network element establishes a communication connection with the resource management component; the resource management component is configured to, upon obtaining CPU core frequency reconfiguration information of at least one service unit in the network element, determine a target CPU core according to the CPU core frequency reconfiguration information and allocate the CPU core to the network element.
This application provides a communication device, including a memory and one or more processors; the memory is configured to store one or more programs; when the one or more programs are executed by the one or more processors, the one or more processors implement the above CPU core allocation method.
This application provides a storage medium storing a computer program which, when executed by a processor, implements the above CPU core allocation method.
Brief Description of the Drawings
Figure 1 is a structural block diagram of a network element provided by an embodiment of this application;
Figure 2 is a hardware architecture diagram of a network element provided by an embodiment of this application;
Figure 3 is a flowchart of a CPU core allocation method provided by an embodiment of this application;
Figure 4 is a flowchart of another CPU core allocation method provided by an embodiment of this application;
Figure 5 is a schematic diagram of CPU core allocation interaction provided by an embodiment of this application;
Figure 6 is a structural block diagram of a CPU core allocation system provided by an embodiment of this application;
Figure 7 is a schematic diagram of the configuration relationship between CPU core load levels, functional units and service units provided by an embodiment of this application;
Figure 8 is a schematic diagram of dynamic CPU frequency adjustment by a CPU management technology provided by an embodiment of this application;
Figure 9 is a schematic diagram of CPU core allocation provided by an embodiment of this application;
Figure 10 is a structural block diagram of a CPU core allocation device provided by an embodiment of this application;
Figure 11 is a structural block diagram of another CPU core allocation device provided by an embodiment of this application;
Figure 12 is a schematic structural diagram of a communication device provided by an embodiment of this application.
Detailed Description
Embodiments of this application are described below with reference to the accompanying drawings. This application is described in conjunction with the drawings of the embodiments, and the examples given are only used to explain this application.
With the development of mobile communication technology, new network services and applications keep emerging. Beyond basic voice and text data transmission, high-definition video, autonomous driving, smart cities and a growing number of smart wearable devices all place higher demands on communication networks and have driven the development of 5G technology. 5G networks require higher transmission rates, highly reliable networks, lower latency and support for wider, large-scale connectivity. As communication system design has evolved, control plane network elements and data plane network elements have gradually been separated and decoupled. In the network architecture of the 5G core network, the UPF, as a data plane network element, performs data forwarding and has extremely high performance requirements.
Traditional communication equipment is dedicated hardware designed and manufactured by communication equipment vendors. Dedicated hardware can be designed with full consideration of the constraints and requirements of actual application scenarios and can be optimized from the lowest level, effectively achieving optimal device performance. While dedicated network processors have many advantages, the heavy use of proprietary technologies means that their development suffers from a shortage of skilled engineers, an immature ecosystem and long development cycles. In addition, the custom-hardware-based design makes dedicated network processors complex to design and manufacture, and inconvenient to upgrade, maintain and replace.
As network applications have grown richer, the wide variety of functional requirements also demands more frequent updates and iterations of network elements. The long development cycles and high iteration costs inherent in network processors based on dedicated hardware, set against the need for network function expansion and frequent product iteration, gradually became the main contradiction in network system design. The design concept of Network Functions Virtualization (NFV) emerged in response. Network function virtualization requires communication equipment to be designed with software separated from hardware. Communication equipment hardware has shifted from complex dedicated hardware devices to general-purpose servers with a complete ecosystem and rich functionality. Complex and changing service requirements are implemented in software code, effectively resolving the main contradiction described above.
In its design and implementation, the UPF likewise follows the concept of network function virtualization: it uses general-purpose servers as the underlying hardware and, based on virtualization technology, abstracts the server's CPU, memory, disk, network card and other hardware resources into virtualized resources for allocation and use. Hardware resources are virtualized and virtual resources are cloudified, serving as cloud system infrastructure for upper-layer applications. The CPU design of general-purpose servers is constrained by physical limits, so the clock frequency increase of a single CPU core is limited, and a multi-core architecture is usually used to scale performance horizontally. New problems introduced by the multi-core architecture include the need to guarantee cache coherence between cores and the need to schedule processes across multiple cores. Frequent scheduling of processes between cores causes the caches of different cores to be refreshed repeatedly, increasing the frequency of data exchange between cache and memory and thus degrading overall server performance. For this reason, the UPF adopts a core-binding design in its process scheduling strategy. Both the distribution threads that perform the main dispatching tasks and the worker threads that process tasks are bound via CPU core affinity so that each thread runs on its designated CPU core for a long time. At the same time, the UPF uses core isolation technology to isolate the cores used by the service from operating system scheduling, so that those cores are not scheduled by the operating system, preventing the operating system from migrating threads away from the designated cores.
By default under the above multi-core CPU architecture, all cores isolated from the operating system and used by the UPF service are equivalent, with the same frequency and power consumption. However, a UPF network element is designed as multiple microservices with different responsibilities, and different microservices run in different virtual machines or containers. Because of their different responsibilities, the microservices have different CPU computing power requirements. This conflicts with having identical CPU cores: when the cores of microservices that require more computing power are already running at nearly full capacity, some CPU cores still retain redundant computing power. Because of the CPU core affinity settings and the core isolation strategy, the UPF cannot rely on operating system scheduling to dynamically match CPU computing power to actual demand. This wastes CPU computing power; some highly loaded CPU cores reach their computing limit before the others and become the bottleneck for improving overall system performance.
With the widespread adoption of multi-core CPU architectures, performance optimization techniques for multi-core architectures are evolving rapidly. Speed Select Technology (SST) adds per-core frequency configuration and thus supports non-uniform frequency configuration across CPU cores. A UPF network element can use this technology to better match CPU core frequencies to actual service requirements and thereby improve overall UPF performance.
In the embodiments of this application, SST is selected to finely control the operating state, frequency, power consumption and so on of single and multiple processor cores. Without changing the overall power consumption and computing capability of the CPU, computing power is distributed more reasonably among the cores, improving the forwarding capability of the whole UPF device. The embodiments of this application solve the problem that, under a multi-core architecture, a uniform frequency and power consumption configuration across cores wastes CPU computing power and leaves it underutilized.
The embodiments of this application provide a high-performance UPF network element based on CPU management, which can be widely used in the 5G core network architecture as a user plane packet forwarding unit to achieve high-speed forwarding of network packets.
To facilitate the explanation of the solutions of the embodiments of this application, the structure of the network element is described first. The network element refers to a data plane forwarding network element. For example, the network element may be a UPF. In the embodiments, the network element includes multiple service units. Figure 1 is a structural block diagram of a network element provided by an embodiment of this application. In the embodiments, the network element can divide the service units according to different service functions. As shown in Figure 1, when the network element is a UPF, the UPF may include six service units: the TECS Cloud Foundation (TCF, the cloud base of the Tulip Elastic Cloud System, TECS); the Operation Maintain Unit (OMU); the Interface Process Unit (IPU); the General Service Unit (GSU); the Central Data Unit (CDU); and the Packet Forwarding Unit (PFU). The OMU manages the UPF and communicates with the Element Management System (EMS) and the Virtualized Network Function Manager (VNFM); the IPU handles signaling transport processing and charging record processing; the GSU handles service access; the PFU communicates with network elements other than the EMS/VNFM and forwards user plane packets; the CDU handles data storage and backup. Although these service units together form the overall UPF architecture, the user plane packet forwarding task that determines the PFU's performance is actually handled only by the PFU. In a real UPF operating environment, it is also the CPU cores occupied by the PFU unit that first hit the performance limit. Therefore, different CPU core operating frequencies are set for the service units according to their different performance requirements, and the CPUs are managed by category.
In an embodiment, Figure 2 is a hardware architecture diagram of a network element provided by an embodiment of this application. As shown in Figure 2, the main task of the UPF is packet forwarding, and its hardware core components are the CPU, memory and network card. The two CPUs use a Non-Uniform Memory Access (NUMA) architecture and jointly serve as the computing core of the whole device. Each CPU has its own independent memory and its own high-speed serial expansion bus (Peripheral Component Interconnect Express, PCIe) slots. Both CPUs have a multi-core architecture and are composed of multiple physical cores. When the UPF device forwards packets, the network card is mainly responsible for receiving and sending packets, while packet processing is handled by specific cores on the CPU designated through distribution rules. The load on the CPU cores responsible for user plane packet processing and forwarding increases as the throughput of the UPF device increases. The load on the CPU cores responsible for running the internal management processes of the UPF network element and for controlling packet forwarding processing is relatively stable, remaining at a low level and not fluctuating significantly with changes in the overall throughput of the UPF device.
The frequency level, service level and load level involved in the embodiments of this application can be understood as the same concept; they are merely described differently in different components or network elements.
In an embodiment, Figure 3 is a flowchart of a CPU core allocation method provided by an embodiment of this application. This embodiment applies to the case where CPU core frequencies are dynamically configured in a high-performance UPF of cloud system infrastructure. This embodiment can be executed by the resource management component. As shown in Figure 3, this embodiment includes S310-S330.
S310: Obtain CPU core frequency reconfiguration information of at least one service unit in the network element.
The CPU core frequency reconfiguration information refers to information for reconfiguring the current operating frequency of CPU cores. In an embodiment, the CPU core frequency reconfiguration information includes one of the following: the target number of CPU cores corresponding to each service unit; the target load level to which the CPU cores corresponding to each service unit belong; or the target frequency range to which the CPU cores corresponding to each service unit belong; where there is a one-to-one correspondence between the target load level and the target frequency range of each CPU core. In the embodiments, each service unit may correspond to one type of CPU core. The CPU core frequency ranges and load levels corresponding to different service units may be different or the same. In one example, the CPU core frequency reconfiguration information may carry the CPU core frequency configuration information of multiple service units, and the CPU core frequency configuration information of the multiple service units can be treated as a whole. In another example, the CPU core frequency reconfiguration information may carry the CPU core frequency configuration information of a single service unit, with each service unit corresponding to one piece of CPU core frequency configuration information.
In an embodiment, the service units in the network element can be divided into four different functional units according to the service functions corresponding to the different service units, and each functional unit corresponds to a service level. One functional unit includes at least one service unit. Service units belonging to the same functional unit have the same service level; accordingly, service units not belonging to the same functional unit have different service levels. In the embodiments, the resource management component may divide the service level into four levels and configure all CPU cores on the CPU bus to the lowest service level by default. The target number of CPU cores corresponding to each service unit refers to the number of CPU cores required by the service units belonging to different functional units in the network element; the target load level of the CPU cores corresponding to each service unit refers to the load level corresponding to that service unit; the target frequency range of the CPU cores corresponding to each service unit refers to the frequency range corresponding to that service unit. In the embodiments, there is a one-to-one correspondence between each target load level and target frequency range. For example, suppose the load levels are level0, level1, level2 and level3, where level0 corresponds to the frequency range 1.4GHz-1.6GHz, level1 to 1.6GHz-2.0GHz, level2 to 2.0GHz-2.2GHz, and level3 to 2.3GHz-2.5GHz.
In the embodiments, during the instantiation of at least one service unit in the network element, the resource management component may allocate CPU core resources to each service unit. After CPU management technology is used for performance enhancement, the resource management component obtains the CPU core frequency reconfiguration information of at least one service unit in the network element and, when allocating CPU core resources, maps the target CPU cores corresponding to each service unit to the corresponding frequency range based on the principle of on-demand allocation.
S320: Determine the target CPU core according to the CPU core frequency reconfiguration information.
The target CPU core refers to the CPU core allocated by the resource management component to a service unit in the network element. In the embodiments, the resource management component searches for a matching CPU core according to the CPU core frequency reconfiguration information and uses the matching CPU core as the target CPU core. The resource management component configures one or more CPU cores for each service unit based on the principle of on-demand allocation.
S330: Allocate the target CPU core to the network element.
In the embodiments, after the resource management component determines the target CPU core, it allocates the target CPU core to the network element, so that the service unit in the network element performs service processing through the target CPU core.
In the technical solution of this embodiment, the CPU core frequency reconfiguration information of at least one service unit in the network element is obtained, the operating frequency of CPU cores is dynamically adjusted according to that information to obtain target CPU cores adjusted to the target frequency range, and the target CPU cores are allocated to the network element. This solves the technical problem of load imbalance between cores, dynamically matches the total CPU core computing power to the load of the service processes, and thereby improves the packet forwarding performance of the network element.
In an embodiment, before allocating the target CPU core to the network element, the method further includes: transmitting the core index corresponding to the target CPU core and the CPU core frequency reconfiguration information to the CPU, so that the CPU reconfigures the target CPU core according to the CPU core frequency reconfiguration information and the core index; and receiving a successful transmission message fed back by the CPU. The core index refers to the index number corresponding to each target CPU core. In the embodiments, to facilitate CPU core management, each CPU core is configured with a unique core index. During network element instantiation and deployment, the CPU core frequency reconfiguration information is sent to the resource management component, which determines the target CPU core according to the information and delivers the CPU core frequency reconfiguration information and the core index of the target CPU core to the CPU bus, so that the CPU bus configures the operating frequency of the target CPU core. This completes the interaction between the network element and the CPU and effectively ensures that each service unit in the network element runs on its CPU cores at the configured operating frequency.
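As an illustration of this exchange, the sketch below shows one plausible shape of the message a resource management component might deliver toward the CPU bus: each target core's index together with the frequency range to configure, followed by a check for the acknowledgement. The delivery interface, field names and acknowledgement mechanism are assumptions made for illustration, not details fixed by this application.

    def deliver_core_config(bus, target_cores, freq_range):
        # Send (core index, frequency range) pairs to the CPU/bus and wait for the
        # "successful transmission" feedback described above (hypothetical interface).
        payload = [{"core_index": c, "min_ghz": freq_range[0], "max_ghz": freq_range[1]}
                   for c in target_cores]
        bus.send(payload)
        if not bus.wait_for_ack():
            raise RuntimeError("CPU did not acknowledge the frequency reconfiguration")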
In an embodiment, determining the target CPU core according to the CPU core frequency reconfiguration information includes: selecting a target number of candidate CPU cores according to the target load level or target frequency range; and determining the target CPU core according to the service status of the candidate CPU cores.
A candidate CPU core is a CPU core on the CPU bus that matches the target load level or target frequency range; the service status refers to the current running state of the candidate CPU core. In the embodiments, the service status may include an idle state and a running state, where the idle state means the CPU core is currently idle and the running state means the CPU core is currently running. After the resource management component selects the target number of candidate CPU cores according to the target load level or target frequency range, it needs to determine the current service status of each candidate CPU core and, based on that status, determine whether the candidate CPU core can be used as the target CPU core.
In an embodiment, selecting a target number of candidate CPU cores according to the target load level or target frequency range includes: determining the corresponding target service level according to the target load level or target frequency range; and selecting a target number of candidate CPU cores according to the target service level.
The target service level is used to characterize the priority of CPU cores in the resource management component. In the embodiments, the larger the level value corresponding to the target service level, the lower the corresponding frequency range. For example, suppose the service levels are divided into Class of Service (CLOS) 0, CLOS1, CLOS2 and CLOS3, where CLOS3 corresponds to the frequency range 1.4GHz-1.6GHz, CLOS2 to 1.6GHz-2.0GHz, CLOS1 to 2.0GHz-2.2GHz, and CLOS0 to 2.3GHz-2.5GHz. In the embodiments, the resource management component can determine the target service level according to the target load level or target frequency range corresponding to the service unit and, according to the target service level, select from all CPU cores on the CPU bus a target number of CPU cores matching the target frequency range as candidate CPU cores.
In an embodiment, determining the target CPU core according to the service status of the candidate CPU core includes: in response to the service status of the candidate CPU core being the idle state, directly using the candidate CPU core as the target CPU core; in response to the service status of the candidate CPU core being the running state, releasing the allocated CPU core corresponding to the service unit, adjusting the operating frequency of the allocated CPU core to the target frequency range, and using the allocated CPU core as the target CPU core. The allocated CPU core refers to the CPU core that was already allocated to the service unit before the CPU core frequency corresponding to that service unit is reconfigured. In one example, when the candidate CPU core is idle, the resource management component can directly use the candidate CPU core as the target CPU core corresponding to the service unit. In another example, when the candidate CPU core is running, the resource management component can adjust the operating frequency of the CPU core already allocated to the service unit to the target frequency range and use that allocated CPU core, adjusted to the target frequency range, as the target CPU core.
In an embodiment, determining the target CPU core according to the CPU core frequency reconfiguration information includes: obtaining a target number of idle CPU cores; adjusting the current operating frequency of the idle CPU cores to the target frequency range; and using the idle CPU cores adjusted to the target frequency range as the target CPU cores. In the embodiments, after the resource management component receives the CPU core frequency reconfiguration information, it selects a target number of idle CPU cores and determines the current operating frequency of each idle CPU core; if a current operating frequency is not within the target frequency range, the resource management component may adjust it to the target frequency range and use the idle CPU core adjusted to the target frequency range as a target CPU core.
In an embodiment, before obtaining the CPU core frequency reconfiguration information of at least one service unit in the network element, the method further includes: sending a default load level and a default frequency range to the CPU, so that the CPU configures the CPU cores according to the default load level and default frequency range; and receiving a successful configuration message fed back by the CPU. In the embodiments, the default load level and default frequency range refer to the load level and frequency range configured for each CPU core on the CPU bus before the resource management component configures the service software package for the network element. In the embodiments, to reduce the overhead on the CPU bus, the default frequency range is the lowest frequency range, and the service level corresponding to the default load level is CLOS3.
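A minimal sketch of the default-configuration step described above, reusing the same hypothetical bus interface as in the earlier sketch: every core starts at the lowest level (CLOS3, 1.4GHz-1.6GHz) until a service software package raises it.

    DEFAULT_LEVEL = "CLOS3"
    DEFAULT_RANGE = (1.4, 1.6)  # GHz, the lowest frequency range

    def apply_default_configuration(bus, core_indices):
        # Configure every core on the bus to the default load level and frequency range.
        payload = [{"core_index": c, "clos": DEFAULT_LEVEL,
                    "min_ghz": DEFAULT_RANGE[0], "max_ghz": DEFAULT_RANGE[1]}
                   for c in core_indices]
        bus.send(payload)
        return bus.wait_for_ack()  # the successful configuration message fed back by the CPU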
在一实施例中,图4是本申请实施例提供的另一种CPU核的分配方法的流程图。本实施例可以由网元执行。示例性地,网元可以为UPF。如图4所示,本实施例中的CPU核的分配方法包括:S410-S430。
S410、根据待传输业务的业务特征确定网元中至少一个业务单元的CPU核频率重配置信息。
待传输业务的业务特征指的是每个业务单元所处理的待传输业务的属性信息。在实施例中,待传输业务的业务特征包括:业务负载量;以及业务类型。
In an embodiment, the CPU core frequency reconfiguration information includes one of the following: a target quantity of CPU cores corresponding to each service unit; a target load level to which the CPU cores corresponding to each service unit belong; or a target frequency range to which the CPU cores corresponding to each service unit belong, where the target load level to which each CPU core belongs corresponds to the target frequency range one to one.
In an embodiment, the CPU core frequency reconfiguration information is related to the service characteristics of the service units and to the specification of the service software package adopted by the network element. In an embodiment, the load level or frequency range corresponding to each service unit, as well as the target quantity of target CPU cores, may be determined according to the specification of the service software package adopted by the network element. Since the network element includes multiple service units, when the service characteristics of multiple service units change and the service units correspond to different functional units, the target quantity of CPU cores corresponding to each service unit and the target load level or target frequency range to which those CPU cores belong are determined, and the corresponding CPU core frequency reconfiguration information is generated.
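As a rough sketch of how a network element might derive this information from observed service characteristics, assuming a simple threshold rule; the thresholds, field names, and the bias toward forwarding traffic are illustrative assumptions, not taken from the application.

```python
def reconfig_from_traffic(unit: str, load_ratio: float, service_type: str) -> dict:
    """Derive a per-unit reconfiguration entry from observed service characteristics.

    load_ratio: observed load as a fraction of the unit's engineered capacity.
    service_type: e.g. "forwarding" or "signaling"; forwarding units are biased
    toward the higher load levels here purely for illustration.
    """
    if service_type == "forwarding" or load_ratio > 0.8:
        level = 3
    elif load_ratio > 0.5:
        level = 2
    elif load_ratio > 0.2:
        level = 1
    else:
        level = 0
    return {"unit": unit, "load_level": level}

# Example: a packet forwarding unit running hot asks for the highest load level.
print(reconfig_from_traffic("PFU", load_ratio=0.9, service_type="forwarding"))
```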
S420. Send the CPU core frequency reconfiguration information to a resource management component.
In an embodiment, the network element sends the CPU core frequency reconfiguration information to the resource management component, so that the resource management component dynamically adjusts the frequency of CPU cores according to the CPU core frequency reconfiguration information and obtains the corresponding target CPU cores.
S430. Receive the target CPU core allocated by the resource management component.
In an embodiment, the network element receives the target CPU core allocated by the resource management component, so as to process and transmit the to-be-transmitted service through the CPU core.
In an embodiment, FIG. 5 is a schematic diagram of CPU core allocation interaction provided by an embodiment of this application. As shown in FIG. 5, the CPU core allocation process in this embodiment includes the following steps (a code sketch of this interaction is given after the steps):
S510. Deliver default configuration information to the CPU.
The default configuration information includes the default load level and the default frequency range.
S520. Receive a successful-configuration message fed back by the CPU.
S530. Receive CPU core frequency reconfiguration information sent by the UPF.
S540. Acquire idle target CPU cores according to the CPU core frequency reconfiguration information.
S550. Deliver service configuration information to the CPU.
S560. Receive a successful-transmission message fed back by the CPU.
S570. Allocate the target CPU cores to the UPF.
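The sketch below strings S510-S570 together from the resource management component's point of view. The cpu and upf objects and their method names are placeholders for whatever transport an implementation actually uses, and the inventory is the same illustrative core list used in the earlier sketches.

```python
def allocation_flow(cpu, upf, inventory):
    """Illustrative S510-S570 sequence as seen by the resource management component."""
    cpu.configure_default(load_level=3, freq_range_ghz=(1.4, 1.6))    # S510
    assert cpu.wait_ack() == "configured"                             # S520
    reconfig = upf.receive_reconfig()                                 # S530
    targets = [c for c in inventory                                   # S540
               if c.state == "idle"][: reconfig["core_count"]]
    cpu.configure_cores(targets, reconfig)                            # S550
    assert cpu.wait_ack() == "transmitted"                            # S560
    upf.assign_cores(targets)                                         # S570
    return targets
```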
In an embodiment, SST allows a user to define the priority of each CPU core. Speed Select Technology - Core Power (SST-CP), as the interface for this capability, defines the mechanism for distributing power among cores when power is constrained. The priority of CPU cores is delivered in the form of CLOS configurations. Each CLOS defines the maximum and minimum frequencies allowed, which in turn determines how frequency is limited and how power is distributed. Each CPU core obtains its own priority by being associated with a CLOS.
In an embodiment, the resource management component divides the CLOS into four levels and sets all CPU cores to the lowest CLOS level by default (for example, CLOS3, with a frequency range of 1.4GHz-1.6GHz). During instantiation of the UPF service network element, the resource management component is responsible for allocating CPU core resources to each service unit (which may also be referred to as a service container). After performance enhancement using the CPU management technique, the resource management component obtains the CPU core frequency level configuration of the service units from the software package and, when allocating core resources, maps the CPU cores to a higher CLOS level on an on-demand basis and sets the corresponding frequency range.
On the UPF service software side, the service units are reorganized within the UPF structure, and configuration support for CPU core frequency levels and operating frequency ranges is added. During instantiated deployment of the network element, the service software package passes the service-level configuration to the resource management component responsible for CPU core management, and that component delivers the frequency configuration to the CPU and its bus, completing the interaction between software and hardware and ultimately ensuring that the functional units under the UPF software architecture run on CPU cores within the designed target frequency ranges. The resource management component acts as a dynamic mapping between service processes and CPU cores. When a functional unit in the UPF migrates between CPU cores, the configuration can still be delivered dynamically, effectively ensuring that the service process runs on the new CPU core within the designed target frequency range.
The embodiments of this application build a strategy for dynamically adjusting CPU core frequency and power consumption based on SST and hierarchical CPU management, and integrate it into the UPF network element, so that computing power is distributed among cores according to the actual load and can be flexibly customized and changed according to actual service requirements. By classifying and managing CPU cores with a dynamic CPU core frequency configuration algorithm, and dynamically adjusting the operating frequency of CPU cores when processes are scheduled across cores, the problem of load imbalance between cores is solved. When some heavily loaded cores reach a performance bottleneck, the frequency of lower-priority cores is reduced first and the frequency of the heavily loaded cores is raised, achieving dynamic matching between CPU computing power and the load of service processes, and thereby improving the packet forwarding performance of the UPF.
In an embodiment, FIG. 6 is a structural block diagram of a CPU core allocation system provided by an embodiment of this application. The CPU core allocation system in this embodiment includes a network element 610 and a resource management component 620, where the network element 610 establishes a communication connection with the resource management component 620. When the resource management component 620 acquires CPU core frequency reconfiguration information of at least one service unit in the network element 610, the resource management component 620 determines a target CPU core according to the CPU core frequency reconfiguration information and allocates the target CPU core to the network element 610.
In the technical solution of this embodiment, the CPU core frequency reconfiguration information of at least one service unit in the network element is acquired, the operating frequency of CPU cores is dynamically adjusted according to the CPU core frequency reconfiguration information to obtain target CPU cores adjusted to the target frequency range, and the target CPU cores are allocated to the network element. This solves the technical problem of load imbalance between cores, achieves dynamic matching between the total computing power of the CPU cores and the load of service processes, and thereby improves the packet forwarding performance of the network element.
In an embodiment, the CPU core allocation system further includes a processor, where the processor establishes a communication connection with the resource management component; the resource management component transmits the core index corresponding to the target CPU core and the CPU core frequency reconfiguration information to the processor, so that the CPU reconfigures the target CPU core according to the CPU core frequency reconfiguration information and the core index. In an embodiment, after the resource management component determines the target CPU core, the resource management component transmits the core index corresponding to the target CPU core and the CPU core frequency reconfiguration information to the processor, so that the CPU reconfigures the target CPU core according to the CPU core frequency reconfiguration information and the core index.
In an embodiment, the network element includes a cluster scheduling unit, a network element management unit, a control signaling processing unit, and a data forwarding unit, where the load levels and frequency ranges corresponding to the cluster scheduling unit, the network element management unit, the control signaling processing unit, and the data forwarding unit are all different, and the total computing power of the CPU cores corresponding to the cluster scheduling unit, the network element management unit, the control signaling processing unit, and the data forwarding unit is a fixed value.
In an embodiment, the cluster scheduling unit includes a cloud base; the network element management unit includes an operation and maintenance unit; the control signaling processing unit includes an interface processing unit, a general service unit, and a central data unit; and the data forwarding unit includes a packet forwarding unit.
In an embodiment, the total computing power of the CPU cores corresponding to the cluster scheduling unit, the network element management unit, the control signaling processing unit, and the data forwarding unit is a fixed value; when the operating frequency of one service unit is raised, the operating frequencies of the other service units need to be lowered, so that the total computing power of the CPU cores remains constant.
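As a toy illustration of this constraint, the sketch below treats the sum of per-unit frequencies as a stand-in for total computing power (a simplification; real performance and power do not scale linearly with frequency) and checks a rebalance against a fixed budget. The rebalance helper and its donor scheme are assumptions for illustration.

```python
def rebalance(freqs_ghz: dict, unit: str, new_ghz: float, donors: list) -> dict:
    """Raise one unit's frequency and spread the same total reduction over donor
    units so the summed frequency budget stays constant."""
    delta = new_ghz - freqs_ghz[unit]
    out = dict(freqs_ghz)
    out[unit] = new_ghz
    for d in donors:
        out[d] -= delta / len(donors)    # donors give back the raised amount equally
    assert abs(sum(out.values()) - sum(freqs_ghz.values())) < 1e-9
    return out

# Example: raise the forwarding unit while OMU and GSU give back headroom.
print(rebalance({"PFU": 2.2, "OMU": 2.2, "GSU": 2.2}, "PFU", 2.4, ["OMU", "GSU"]))
```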
In an embodiment, the operating frequency of CPU cores is configured differently according to the category of the functional unit running on them, so as to dynamically distribute computing power among CPU cores.
In an embodiment, FIG. 7 is a schematic diagram of the configuration relationship among CPU core load levels, functional units, and service units provided by an embodiment of this application. In this embodiment, taking a UPF as the network element as an example, the configuration relationship among CPU core load levels, functional units, and service units is described. As shown in FIG. 7, the network element includes six service units: TCF, OMU, IPU, GSU, CDU, and PFU; and the functional units include the cluster scheduling unit, the network element management unit, the control signaling processing unit, and the data forwarding unit. The cluster scheduling unit includes the TCF; the network element management unit includes the OMU; the control signaling processing unit includes the IPU, the GSU, and the CDU; and the data forwarding unit includes the PFU. The load levels corresponding to the cluster scheduling unit, the network element management unit, the control signaling processing unit, and the data forwarding unit are level0, level1, level2, and level3, respectively.
In an embodiment, FIG. 8 is a schematic diagram of dynamic CPU frequency adjustment by the CPU management technique provided by an embodiment of this application. As shown in FIG. 8, without the CPU management technique, each CPU core (core 0, core 1, ..., core n) runs at 2.2GHz; with the CPU management technique, the operating frequency of each CPU core can be adjusted dynamically, for example, the operating frequency of core 0 is adjusted from 2.2GHz to 1.5GHz while the operating frequency of core 1 is adjusted from 2.2GHz to 2.4GHz. While keeping the overall CPU power consumption and the overall total computing power unchanged, this embodiment adds the capability for the UPF to dynamically configure CPU core frequencies, realizing dynamic distribution of frequency and power consumption among CPU cores, improving the utilization efficiency of CPU computing capability, and improving the forwarding performance of the UPF, so that higher throughput can be achieved with the same hardware configuration, effectively reducing the user's cost for the same throughput.
In an embodiment, FIG. 9 is a schematic diagram of CPU core allocation provided by an embodiment of this application. As shown in FIG. 9, the load level and frequency range corresponding to each service unit in the UPF can be determined according to the specification of the UPF service software package, and the load level and frequency range of each service unit are sent to the resource management component, so that the resource management component dynamically allocates target CPU cores according to the load level (which may also be referred to as the frequency level) and the frequency range of each service unit. For example, the target CPU cores corresponding to the TCF are core0-core3; the target CPU cores corresponding to the OMU are core4-core7; the target CPU cores corresponding to the IPU are core8-core11; the target CPU cores corresponding to the GSU are core12-core15; the target CPU cores corresponding to the CDU are core16-core19; and the target CPU cores corresponding to the PFU are core20-core43. After the resource management component determines the target CPU cores corresponding to each service unit, it allocates the target CPU cores to the corresponding service units and sends the operating frequency of each CPU core to the CPU bus, so that the CPU bus adjusts the operating frequency of the target CPU cores to the corresponding frequency range.
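The example allocation of FIG. 9 can be written out as a plain mapping; the table below only restates the figure's example, and expanding it per core is a sketch of the bookkeeping rather than a prescribed data structure.

```python
# Core ranges per service unit, as in the FIG. 9 example (inclusive).
UNIT_CORE_RANGES = {
    "TCF": (0, 3),
    "OMU": (4, 7),
    "IPU": (8, 11),
    "GSU": (12, 15),
    "CDU": (16, 19),
    "PFU": (20, 43),
}

def cores_for_unit(unit: str) -> list:
    """Expand a unit's core range into an explicit list of core indexes."""
    lo, hi = UNIT_CORE_RANGES[unit]
    return list(range(lo, hi + 1))

assert len(cores_for_unit("PFU")) == 24   # the forwarding unit gets the bulk of the cores
```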
In an embodiment, FIG. 10 is a structural block diagram of a CPU core allocation apparatus provided by an embodiment of this application. This embodiment is applied to a resource management component. As shown in FIG. 10, the CPU core allocation apparatus in this embodiment includes an acquisition module 1010, a determination module 1020, and an allocation module 1030.
The acquisition module 1010 is configured to acquire CPU core frequency reconfiguration information of at least one service unit in a network element; the determination module 1020 is configured to determine a target CPU core according to the CPU core frequency reconfiguration information; and the allocation module 1030 is configured to allocate the target CPU core to the network element.
In an embodiment, before the target CPU core is allocated to the network element, the CPU core allocation apparatus further includes:
a transmission module, configured to transmit the core index corresponding to the target CPU core and the CPU core frequency reconfiguration information to the CPU, so that the CPU reconfigures the target CPU core according to the CPU core frequency reconfiguration information and the core index; and a first receiving module, configured to receive a successful-transmission message fed back by the CPU.
In an embodiment, the CPU core frequency reconfiguration information includes one of the following: a target quantity of CPU cores corresponding to each service unit; a target load level to which the CPU cores corresponding to each service unit belong; or a target frequency range to which the CPU cores corresponding to each service unit belong, where the target load level to which each CPU core belongs corresponds to the target frequency range one to one.
In an embodiment, the determination module 1020 includes:
a selection unit, configured to select the target quantity of candidate CPU cores according to the target load level or the target frequency range; and a first determination unit, configured to determine the target CPU core according to the service state of the candidate CPU cores.
In an embodiment, the determination module 1020 includes:
an acquisition unit, configured to acquire the target quantity of idle CPU cores; an adjustment unit, configured to adjust the current operating frequency of the idle CPU cores to the target frequency range; and a second determination unit, configured to take the idle CPU cores adjusted to the target frequency range as the target CPU cores.
In an embodiment, the selection unit includes:
a first determination subunit, configured to determine a corresponding target service level according to the target load level or the target frequency range; and a selection subunit, configured to select the target quantity of candidate CPU cores according to the target service level.
In an embodiment, the first determination unit includes:
a second determination subunit, configured to, in response to the service state of a candidate CPU core being the idle state, directly take the candidate CPU core as the target CPU core; and a third determination subunit, configured to, in response to the service state of a candidate CPU core being the running state, release the allocated CPU core corresponding to the service unit, adjust the operating frequency of the allocated CPU core to the target frequency range, and take the allocated CPU core as the target CPU core.
In an embodiment, before the CPU core frequency reconfiguration information of at least one service unit in the network element is acquired, the CPU core allocation apparatus further includes:
a sending module, configured to send a default load level and a default frequency range to the CPU, so that the CPU configures the CPU cores according to the default load level and the default frequency range; and a second receiving module, configured to receive a successful-configuration message fed back by the CPU.
The CPU core allocation apparatus provided in this embodiment is configured to implement the CPU core allocation method applied to the resource management component in the embodiment shown in FIG. 3; its implementation principle and technical effects are similar and are not repeated here.
In an embodiment, FIG. 11 is a structural block diagram of another CPU core allocation apparatus provided by an embodiment of this application. This embodiment is applied to a network element. As shown in FIG. 11, the CPU core allocation apparatus in this embodiment includes a determination module 1110, a sending module 1120, and a receiving module 1130.
The determination module 1110 is configured to determine CPU core frequency reconfiguration information of at least one service unit in the network element according to service characteristics of a service to be transmitted; the sending module 1120 is configured to send the CPU core frequency reconfiguration information to a resource management component; and the receiving module 1130 is configured to receive a target CPU core allocated by the resource management component.
In an embodiment, the CPU core frequency reconfiguration information includes one of the following: a target quantity of CPU cores corresponding to each service unit; a target load level to which the CPU cores corresponding to each service unit belong; or a target frequency range to which the CPU cores corresponding to each service unit belong, where the target load level to which each CPU core belongs corresponds to the target frequency range one to one.
The CPU core allocation apparatus provided in this embodiment is configured to implement the CPU core allocation method applied to the network element in the embodiment shown in FIG. 4; its implementation principle and technical effects are similar and are not repeated here.
In an embodiment, FIG. 12 is a schematic structural diagram of a communication device provided by an embodiment of this application. As shown in FIG. 12, the device provided by this application includes a processor 1210 and a memory 1220. The number of processors 1210 in the device may be one or more, and one processor 1210 is taken as an example in FIG. 12. The number of memories 1220 in the device may be one or more, and one memory 1220 is taken as an example in FIG. 12. The processor 1210 and the memory 1220 of the device may be connected by a bus or in other ways; connection by a bus is taken as an example in FIG. 12. In this embodiment, the device may be a resource management component.
As a computer-readable storage medium, the memory 1220 may be configured to store software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the device of any embodiment of this application (for example, the acquisition module 1010, the determination module 1020, and the allocation module 1030 in the CPU core allocation apparatus). The memory 1220 may include a program storage area and a data storage area, where the program storage area may store an operating system and applications required by at least one function, and the data storage area may store data created according to the use of the device, and the like. In addition, the memory 1220 may include a high-speed random access memory and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, the memory 1220 may include memories remotely located relative to the processor 1210, and these remote memories may be connected to the device through a network. Examples of such a network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
In a case where the communication device is a resource management component, the device provided above may be configured to perform the CPU core allocation method applied to the resource management component provided by any of the above embodiments, with corresponding functions and effects.
In a case where the communication device is a network element, the device provided above may be configured to perform the CPU core allocation method applied to the network element provided by any of the above embodiments, with corresponding functions and effects.
An embodiment of this application further provides a storage medium containing computer-executable instructions, where the computer-executable instructions, when executed by a computer processor, are used to perform a CPU core allocation method applied to a resource management component, the method including: acquiring CPU core frequency reconfiguration information of at least one service unit in a network element; determining a target CPU core according to the CPU core frequency reconfiguration information; and allocating the target CPU core to the network element.
An embodiment of this application further provides a storage medium containing computer-executable instructions, where the computer-executable instructions, when executed by a computer processor, are used to perform a CPU core allocation method applied to a network element, the method including: determining CPU core frequency reconfiguration information of at least one service unit in the network element according to service characteristics of a service to be transmitted; sending the CPU core frequency reconfiguration information to a resource management component; and receiving a target CPU core allocated by the resource management component.
Those skilled in the art should understand that the term user equipment covers any suitable type of wireless user equipment, such as a mobile phone, a portable data processing apparatus, a portable web browser, or a vehicle-mounted mobile station.
In general, the various embodiments of this application may be implemented in hardware or dedicated circuits, software, logic, or any combination thereof. For example, some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software that may be executed by a controller, a microprocessor, or another computing apparatus, although this application is not limited thereto.
The embodiments of this application may be implemented by a data processor of a mobile apparatus executing computer program instructions, for example in a processor entity, or by hardware, or by a combination of software and hardware. The computer program instructions may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages.
A block diagram of any logic flow in the drawings of this application may represent program operations, or may represent interconnected logic circuits, modules, and functions, or may represent a combination of program operations and logic circuits, modules, and functions. A computer program may be stored on a memory. The memory may be of any type suitable for the local technical environment and may be implemented using any suitable data storage technology, such as, but not limited to, a read-only memory (ROM), a random access memory (RAM), or optical memory devices and systems (digital video disc (DVD) or compact disc (CD)). Computer-readable media may include non-transitory storage media. The data processor may be of any type suitable for the local technical environment, such as, but not limited to, a general-purpose computer, a special-purpose computer, a microprocessor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a programmable logic device such as a field-programmable gate array (FPGA), or a processor based on a multi-core processor architecture.

Claims (15)

  1. A method for allocating central processing unit (CPU) cores, applied to a resource management component, comprising:
    acquiring CPU core frequency reconfiguration information of at least one service unit in a network element;
    determining a target CPU core according to the CPU core frequency reconfiguration information; and
    allocating the target CPU core to the network element.
  2. The method according to claim 1, before allocating the target CPU core to the network element, further comprising:
    transmitting a core index corresponding to the target CPU core and the CPU core frequency reconfiguration information to a CPU, so that the CPU reconfigures the target CPU core according to the CPU core frequency reconfiguration information and the core index; and
    receiving a successful-transmission message fed back by the CPU.
  3. The method according to claim 1 or 2, wherein the CPU core frequency reconfiguration information comprises one of the following: a target quantity of CPU cores corresponding to each service unit; a target load level to which the CPU cores corresponding to each service unit belong; or a target frequency range to which the CPU cores corresponding to each service unit belong; wherein the target load level to which each CPU core belongs corresponds to the target frequency range one to one.
  4. The method according to claim 3, wherein determining the target CPU core according to the CPU core frequency reconfiguration information comprises:
    selecting the target quantity of candidate CPU cores according to the target load level or the target frequency range; and
    determining the target CPU core according to a service state of the candidate CPU cores.
  5. The method according to claim 3, wherein determining the target CPU core according to the CPU core frequency reconfiguration information comprises:
    acquiring the target quantity of idle CPU cores;
    adjusting a current operating frequency of the idle CPU cores to the target frequency range; and
    taking the idle CPU cores adjusted to the target frequency range as the target CPU core.
  6. The method according to claim 4, wherein selecting the target quantity of candidate CPU cores according to the target load level or the target frequency range comprises:
    determining a corresponding target service level according to the target load level or the target frequency range; and
    selecting the target quantity of candidate CPU cores according to the target service level.
  7. The method according to claim 4, wherein determining the target CPU core according to the service state of the candidate CPU cores comprises:
    in response to the service state of a candidate CPU core being an idle state, directly taking the candidate CPU core as the target CPU core; and
    in response to the service state of a candidate CPU core being a running state, releasing an allocated CPU core corresponding to the service unit, adjusting an operating frequency of the allocated CPU core to the target frequency range, and taking the allocated CPU core as the target CPU core.
  8. The method according to claim 1, before acquiring the CPU core frequency reconfiguration information of the at least one service unit in the network element, further comprising:
    sending a default load level and a default frequency range to a CPU, so that the CPU configures the CPU cores according to the default load level and the default frequency range; and
    receiving a successful-configuration message fed back by the CPU.
  9. A method for allocating central processing unit (CPU) cores, applied to a network element, comprising:
    determining CPU core frequency reconfiguration information of at least one service unit in the network element according to service characteristics of a service to be transmitted;
    sending the CPU core frequency reconfiguration information to a resource management component; and
    receiving a target CPU core allocated by the resource management component.
  10. A system for allocating central processing unit (CPU) cores, comprising a network element and a resource management component, wherein the network element establishes a communication connection with the resource management component; and
    the resource management component is configured to, in a case where CPU core frequency reconfiguration information of at least one service unit in the network element is acquired, determine a target CPU core according to the CPU core frequency reconfiguration information and allocate the target CPU core to the network element.
  11. The system according to claim 10, further comprising a processor, wherein the processor establishes a communication connection with the resource management component; and
    the resource management component is further configured to transmit a core index corresponding to the target CPU core and the CPU core frequency reconfiguration information to the processor, so that the CPU reconfigures the target CPU core according to the CPU core frequency reconfiguration information and the core index.
  12. The system according to claim 10, wherein the network element comprises a cluster scheduling unit, a network element management unit, a control signaling processing unit, and a data forwarding unit; and
    wherein the load levels and frequency ranges corresponding to the cluster scheduling unit, the network element management unit, the control signaling processing unit, and the data forwarding unit are all different, and a total computing power of the CPU cores corresponding to the cluster scheduling unit, the network element management unit, the control signaling processing unit, and the data forwarding unit is a fixed value.
  13. The system according to claim 12, wherein the cluster scheduling unit comprises a cloud base; the network element management unit comprises an operation and maintenance unit; the control signaling processing unit comprises an interface processing unit, a general service unit, and a central data unit; and the data forwarding unit comprises a packet forwarding unit.
  14. A communication device, comprising a memory and at least one processor, wherein
    the memory is configured to store at least one program; and
    when the at least one program is executed by the at least one processor, the at least one processor implements the method for allocating central processing unit (CPU) cores according to any one of claims 1-8 or 9.
  15. A storage medium storing a computer program, wherein, when the computer program is executed by a processor, the method for allocating central processing unit (CPU) cores according to any one of claims 1-8 or 9 is implemented.
PCT/CN2023/108098 2022-07-19 2023-07-19 Cpu核的分配方法、***、设备和存储介质 WO2024017285A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210849849.0A CN117453383A (zh) 2022-07-19 2022-07-19 Cpu核的分配方法、***、设备和存储介质
CN202210849849.0 2022-07-19

Publications (1)

Publication Number Publication Date
WO2024017285A1 true WO2024017285A1 (zh) 2024-01-25

Family

ID=89586169

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/108098 WO2024017285A1 (zh) 2022-07-19 2023-07-19 Cpu核的分配方法、***、设备和存储介质

Country Status (2)

Country Link
CN (1) CN117453383A (zh)
WO (1) WO2024017285A1 (zh)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107239345A (zh) * 2017-06-06 2017-10-10 深圳天珑无线科技有限公司 数据处理方法及装置
US20190065078A1 (en) * 2017-08-24 2019-02-28 Hewlett Packard Enterprise Development Lp Acquisition of iops and mbps limits independently at a scheduler in a scheduler hierarchy
CN114518940A (zh) * 2020-11-19 2022-05-20 北京希姆计算科技有限公司 任务调度电路、方法、电子设备及计算机可读存储介质
CN114416357A (zh) * 2022-01-06 2022-04-29 北京百度网讯科技有限公司 容器组的创建方法、装置、电子设备和介质

Also Published As

Publication number Publication date
CN117453383A (zh) 2024-01-26

Similar Documents

Publication Publication Date Title
US10678584B2 (en) FPGA-based method for network function accelerating and system thereof
US11669372B2 (en) Flexible allocation of compute resources
US11929927B2 (en) Network interface for data transport in heterogeneous computing environments
US11799952B2 (en) Computing resource discovery and allocation
US20180367460A1 (en) Data flow processing method and apparatus, and system
US8949847B2 (en) Apparatus and method for managing resources in cluster computing environment
US20190222518A1 (en) Technologies for network device load balancers for accelerated functions as a service
WO2019233322A1 (zh) 资源池的管理方法、装置、资源池控制单元和通信设备
US20100049892A1 (en) Method of routing an interrupt signal directly to a virtual processing unit in a system with one or more physical processing units
CN110427270B (zh) 一种面向rdma网络下分布式连接算子的动态负载均衡方法
CN112905305A (zh) 基于vpp的集群式虚拟化数据转发方法、装置及***
CN106325996A (zh) 一种gpu资源的分配方法及***
CN111404818B (zh) 一种面向通用多核网络处理器的路由协议优化方法
CN114328623A (zh) 芯片***中的数据传输处理方法及相关装置
WO2023020010A1 (zh) 一种运行进程的方法及相关设备
CN114710571A (zh) 数据包处理***
WO2020108337A1 (zh) 一种cpu资源调度方法及电子设备
CN106325995A (zh) 一种gpu资源的分配方法及***
CN116800616B (zh) 虚拟化网络设备的管理方法及相关装置
CN112422251B (zh) 数据传输方法及装置、终端、存储介质
WO2024017285A1 (zh) Cpu核的分配方法、***、设备和存储介质
CN114860387B (zh) 一种面向虚拟化存储应用的hba控制器i/o虚拟化方法
CN107819764B (zh) 面向c-ran的数据分发机制的演进方法
WO2022111466A1 (zh) 任务调度方法、控制方法、电子设备、计算机可读介质
US20220321403A1 (en) Programmable network segmentation for multi-tenant fpgas in cloud infrastructures

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23842347

Country of ref document: EP

Kind code of ref document: A1