EP3820083B1 - Physical function multiplexing method and apparatus and computer storage medium

Info

Publication number
EP3820083B1
Authority
EP
European Patent Office
Prior art keywords
network card
card resource
virtual network
vfs
virtual
Prior art date
Legal status
Active
Application number
EP19841350.2A
Other languages
German (de)
French (fr)
Other versions
EP3820083A4 (en)
EP3820083A1 (en)
Inventor
Yong Liu
Current Assignee
ZTE Corp
Original Assignee
ZTE Corp
Priority date
Filing date
Publication date
Application filed by ZTE Corp filed Critical ZTE Corp
Publication of EP3820083A1 publication Critical patent/EP3820083A1/en
Publication of EP3820083A4 publication Critical patent/EP3820083A4/en
Application granted granted Critical
Publication of EP3820083B1 publication Critical patent/EP3820083B1/en

Classifications

    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08 Configuration management of networks or network elements
    • H04L41/0806 Configuration setting for initial configuration or provisioning, e.g. plug-and-play
    • H04L41/0895 Configuration of virtualised networks or elements, e.g. virtualised network function or OpenFlow elements
    • G06F9/45558 Hypervisor-specific management and integration aspects
    • G06F2009/45595 Network integration; Enabling network access in virtual machine instances

Definitions

  • PCI-SIG: Peripheral Component Interconnect Special Interest Group
  • SR-IOV: Single Root I/O Virtualization
  • OVDK: a network card virtualization technology that integrates the OpenVswitch standard (OVS) and the Data Plane Development Kit (DPDK)
  • OVS: OpenVswitch standard
  • DPDK: Data Plane Development Kit
  • VF: virtual function
  • VNF: virtual network function
  • Embodiments of the present disclosure further provide a computer readable storage medium storing an executable program thereon, wherein the executable program is executable by a processor to implement any of the physical function multiplexing methods mentioned above.
  • the embodiments of the present disclosure further provide a physical function multiplexing apparatus, which includes: a processor and a memory configured to store a computer program operable on the processor, wherein the processor is configured to execute any of the physical function multiplexing methods mentioned above when running the computer program.
  • The non-volatile memory may be a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Ferromagnetic Random Access Memory (FRAM), a Flash Memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM). The magnetic surface memory may be a magnetic disc memory or a magnetic tape memory.
  • The volatile memory may be a Random Access Memory (RAM) that acts as an external high-speed cache. Many forms of RAM may be used, such as a Static Random Access Memory (SRAM), a Synchronous Static Random Access Memory (SSRAM), a Dynamic Random Access Memory (DRAM), a Synchronous Dynamic Random Access Memory (SDRAM), a Double Data Rate Synchronous Dynamic Random Access Memory (DDRSDRAM), an Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), a SyncLink Dynamic Random Access Memory (SLDRAM), and a Direct Rambus Random Access Memory (DRRAM).
  • the method disclosed in the above embodiments of the present disclosure may be applied to a processor or implemented by the processor.
  • the processor may be an integrated circuit chip with a signal processing capacity.
  • the steps in the foregoing methods may be completed using an integrated logic circuit of hardware in the processor or an instruction in a form of software.
  • The above-mentioned processor may be a general-purpose processor, a DSP, another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • the methods, steps, and logic diagrams disclosed in the embodiments of the present disclosure may be implemented or executed by the processor.
  • the general-purpose processor may be a microprocessor or any conventional processor, and the like.
  • Steps of the methods disclosed with reference to the embodiments of the present disclosure may be directly executed and accomplished by means of a hardware decoding processor or may be executed and accomplished using a combination of hardware and software modules in the decoding processor.
  • the software module may be located in a storage medium.
  • the storage medium is located in a memory.
  • the processor reads information in the memory and completes the steps of the foregoing methods in combination with the hardware of the processor.
  • the apparatus may be implemented by one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), FPGAs, general-purpose processors, controllers, MCUs, Microprocessors, or other electronic components for executing the aforementioned method.
  • the disclosed device and method may be implemented in other manners.
  • The device embodiments described above are merely illustrative. For example, the division of units is only a logical function division, and there may be other division modes in actual implementation; for instance, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed.
  • the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces.
  • the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
  • the above-mentioned units illustrated as separated parts may be or may not be separated physically, and the parts illustrated as units may be or may not be physical units. That is, the parts may be located at one place or distributed in multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions in the embodiments.
  • each functional unit in each embodiment of the present disclosure may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
  • the integrated units above may be implemented in the form of hardware, or in the form of hardware and software functional units.
  • The program may be stored in a computer-readable storage medium; when the program is executed, the steps of the above-mentioned method embodiments are performed.
  • the foregoing storage medium includes: any medium that is capable of storing program codes, such as a mobile storage device, a ROM, a RAM, a magnetic disk or an optical disc, etc.
  • The integrated units described above may also be stored in a computer-readable storage medium if they are implemented in the form of a software function module and sold or used as an independent product.
  • Such a software product is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the methods described in the embodiments of the present disclosure.
  • the foregoing storage medium includes: any medium that is capable of storing program codes, such as a mobile storage device, a ROM, a RAM, a magnetic disk or an optical disc, etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Description

    TECHNICAL FIELD
  • The present disclosure relates to the field of communications, and more particularly, to a physical function multiplexing method and apparatus, and a computer storage medium.
  • BACKGROUND
  • At present, more and more telecommunication operators use virtualization as the infrastructure for traditional networking. To improve the virtualized communication performance of a physical network card, the Peripheral Component Interconnect Special Interest Group (PCI-SIG) proposed the Single Root I/O Virtualization (SR-IOV) technology, which has become the de facto industry standard for network card hardware virtualization, since wire speed can be reached without occupying physical server resources. In addition, as the cost of server computing resources has fallen, another network card virtualization technology, hereinafter referred to as OVDK, has been developed; it integrates the virtual switch technology of the OpenVswitch standard (OVS) with the Data Plane Development Kit (DPDK), trading computing resources for network performance, so as to provide high throughput and low delay for virtualized network cards.
  • To achieve the above two objectives, a host is required either to directly allocate a virtual function (VF) to virtual network functions (VNFs) by using the SRIOV-Direct technology, or to use a DPDK driver that occupies a physical function (PF) and a part of the computing resources for the network communications of the VNFs.
  • With the SRIOV-Direct technology, the limited number of VFs on an SR-IOV network card makes it impossible to allocate a large number of virtual network interface cards (vNICs) to VNFs. The OVDK technology is not limited by the number of VFs in the way SRIOV-Direct is, and can in theory virtualize hundreds of vNICs, but its network transmission performance is much inferior to that of SRIOV-Direct because OVDK consumes computing resources. In telecommunication application scenarios, the OVDK technology is therefore applied only to signaling flows, control flows, or some media flow transmission with low requirements on network service quality. Consequently, these two virtualization technologies are currently used in the industry in a mixed manner: in a same network element, SRIOV-Direct is used for media flows requiring high throughput and low delay, while other transmissions with low throughput and delay requirements are networked by OVDK at a low overall cost.
  • Therefore, the above two technologies may be applied to different PFs to achieve the coexistence of network elements of different application types on the same server. However, the number of high-performance PF resources on a server is generally limited. For example, there may be only two 40 GE network cards, and mutual redundant backup needs to be considered, so the two technologies cannot coexist in a way that meets the flexible application of the network elements.
  • In view of the above, it is desirable to have a method for multiplexing these two technologies, so as to deploy network elements using the SRIOV-direct and the OVDK technologies on a server with only one PF. At present, there is no example of deploying the network elements using the SRIOV-direct and the OVDK technologies on the server with only one PF.
  • SUMMARY
  • To solve the existing technical problems, the independent claims provide a physical function multiplexing method and apparatus, and a computer storage medium.
  • To achieve the above objectives, technical solutions of the embodiments of the present disclosure are implemented as follows.
  • A first aspect of the present disclosure provides a physical function multiplexing method applied at a network device, wherein the network device includes at least one virtual machine VM; a first VM in the at least one VM has at least one physical function PF; the first VM is any VM in the at least one VM; a first PF in the at least one PF is configured with at least two virtual functions VFs; and the first PF is any PF in the at least one PF, wherein the method includes: configuring at least one first VF in the at least two VFs to support an OVDK function, the first VF being any VF in the at least two VFs; and configuring other VF excluding the at least one first VF in the at least two VFs to support a single root I/O virtualization SR-IOV function.
  • A second aspect of the present disclosure further provides a physical function multiplexing apparatus applied to a network device, wherein the network device includes at least one virtual machine VM; a first VM in the at least one VM has at least one physical function PF; the first VM is any VM in the at least one VM; a first PF in the at least one PF is configured with at least two virtual functions VFs; and the first PF is any PF in the at least one PF, wherein the apparatus includes: a first configuration module configured to configure at least one first VF in the at least two VFs to support an OVDK function, the first VF being any VF in the at least two VFs; and a second configuration module configured to configure other VF excluding the at least one first VF in the at least two VFs to support a single root I/O virtualization SR-IOV function.
  • A third aspect of the present disclosure further provides a computer readable storage medium storing a computer program thereon, wherein the computer program is executable by a processor to implement the physical function multiplexing method mentioned above.
  • BRIEF DESCRIPTION OF THE DRAWINGS
    • FIG. 1 is a first flow chart of a physical function multiplexing method according to an embodiment of the present disclosure;
    • FIG. 2 is a second flow chart of the physical function multiplexing method according to the embodiment of the present disclosure;
    • FIG. 3 is a first composition diagram of a physical function multiplexing apparatus according to an embodiment of the present disclosure;
    • FIG. 4 is a second composition diagram of the physical function multiplexing apparatus according to the embodiment of the present disclosure;
    • FIG. 5 is a third composition diagram of the physical function multiplexing apparatus according to the embodiment of the present disclosure;
    • FIG. 6 is a fourth composition diagram of the physical function multiplexing apparatus according to the embodiment of the present disclosure;
    • FIG. 7 is a first control diagram of a physical function multiplexing method based on FIG. 6 according to an embodiment of the present disclosure;
    • FIG. 8 is a second control diagram of the physical function multiplexing method based on FIG. 6 according to the embodiment of the present disclosure; and
    • FIG. 9 is a fifth composition diagram of the physical function multiplexing apparatus according to the embodiment of the present disclosure.
    DETAILED DESCRIPTION
  • The present disclosure will be further described hereinafter in detail with reference to the drawings and the specific embodiments.
  • Embodiments of the present disclosure provide a physical function multiplexing method applied at a network device, wherein the network device includes at least one VM; a first VM in the at least one VM has at least one PF; the first VM is any VM in the at least one VM; a first PF in the at least one PF is configured with at least two VFs; and the first PF is any PF in the at least one PF. As shown in FIG. 1, the method mainly includes step 110 and step 120.
  • In step 110: at least one first VF in the at least two VFs is configured to support an OVDK function, where the first VF is any VF in the at least two VFs.
  • Here, the at least one first VF is configured to load a DPDK driver, so that the at least one first VF supports the OVDK function.
  • Specifically, the first VM may run on a physical server, the physical server may be loaded with the first PF, and the first PF may support the SR-IOV technology. A user configures a first VM of the physical server through network management, and an operating system allocates a plurality of VFs to the first PF during initialization. The number of VFs may be determined according to actual application requirements and the properties of the first PF.
  • Further, when the first VM is created, a plurality of VFs are configured at the same time. The physical server includes a computing node, and service processes supporting OVDK and SR-IOV data transmission are simultaneously loaded on the computing node. In addition, another physical server may be taken as the computing node, with the service processes supporting OVDK and SR-IOV data transmission loaded on it at the same time. The computing node controls the OVDK service process to arbitrarily select one or more VFs from the plurality of VFs as the at least one first VF, loads the DPDK driver on the at least one first VF so that it can transmit data based on the DPDK technology, and then links the virtual function module to a virtual switch, so that the virtual function module can provide an OVDK-based virtual switch type interface for a network element to use for data transmission.
  • In practical application, the computing node may control the OVDK service process to select at least one VF to load the DPDK driver. For example, two VFs are selected to load the DPDK driver, and then the two VFs are linked to the virtual switch, so that the two VFs can transmit data based on the DPDK technology.
  • For example, a physical server S1 includes one PF1, and PF1 may support the SR-IOV technology. The user creates a virtual machine VM1 through the network management of the control node. When initializing the virtual machine VM1, the operating system allocates 63 VFs (VF0 to VF62) to PF1. An OVDK service process and an SR-IOV service process are loaded on the physical server S1; the computing node controls the OVDK service process to arbitrarily select one VF, for example VF0, for association, loads the DPDK driver on VF0, and then links VF0 to the virtual switch, so that VF0 can provide an OVDK-based virtual switch type interface.
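  • As an illustration of step 110, the short sketch below shows how a computing-node agent might load a DPDK driver on a selected VF and link it to an OVS-DPDK virtual switch. It is a minimal sketch only, assuming a Linux host with the standard dpdk-devbind.py and ovs-vsctl tools; the bridge name br-ovdk, the port name, and the PCI address are hypothetical, and the OVDK service process of the embodiments may perform the same steps through its own interfaces.

```python
import subprocess

def attach_vf_to_ovdk(vf_pci_addr: str, bridge: str = "br-ovdk", port: str = "dpdk-vf0") -> None:
    """Bind one VF (e.g. VF0 of PF1) to a DPDK driver and link it to the virtual switch."""
    # Load the DPDK userspace driver on the VF so it can transmit data based on DPDK.
    subprocess.run(["dpdk-devbind.py", "--bind=vfio-pci", vf_pci_addr], check=True)

    # Link the VF to the virtual switch: add it as a DPDK port on the OVS bridge so it
    # can provide an OVDK-based virtual switch type interface to a network element.
    subprocess.run(
        ["ovs-vsctl", "add-port", bridge, port,
         "--", "set", "Interface", port,
         "type=dpdk", f"options:dpdk-devargs={vf_pci_addr}"],
        check=True,
    )

if __name__ == "__main__":
    attach_vf_to_ovdk("0000:03:10.0")  # hypothetical PCI address of VF0
```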
  • In step 120: other VF excluding the at least one first VF in the at least two VFs is configured to support an SR-IOV function.
  • Specifically, except for the at least one first VF that already supports the OVDK function, other VF is configured to support the SR-IOV function, and a virtual network card service based on the SR-IOV technology is provided to the network element for data transmission.
  • For example, the computing node controls the SR-IOV service process to be associated with the virtual functions VF1 to VF62, so that the virtual functions VF1 to VF62 can provide virtual network card services based on the SR-IOV technology for data transmission.
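  • The remaining VFs of step 120 stay on the kernel SR-IOV path. The sketch below, which assumes a Linux host where the first PF appears as a netdev named eth2 (a hypothetical name), shows how the VFs can be created on the PF and how their PCI addresses can be enumerated so that the SR-IOV service process can associate with VF1 to VF62 while VF0 is reserved for OVDK.

```python
import glob
import os

def create_vfs(pf_netdev: str, num_vfs: int) -> None:
    """Ask the kernel to allocate num_vfs virtual functions on the given PF."""
    with open(f"/sys/class/net/{pf_netdev}/device/sriov_numvfs", "w") as f:
        f.write(str(num_vfs))

def list_vf_pci_addresses(pf_netdev: str) -> list:
    """Return the PCI addresses of all VFs of the PF, ordered VF0, VF1, ..."""
    links = glob.glob(f"/sys/class/net/{pf_netdev}/device/virtfn*")
    links.sort(key=lambda p: int(p.rsplit("virtfn", 1)[1]))
    # Each virtfnN entry is a symlink to the VF's PCI device directory.
    return [os.path.basename(os.readlink(link)) for link in links]

if __name__ == "__main__":
    create_vfs("eth2", 63)                    # allocate VF0 to VF62 on PF1
    vfs = list_vf_pci_addresses("eth2")
    ovdk_vfs, sriov_vfs = vfs[:1], vfs[1:]    # VF0 for OVDK, VF1 to VF62 for SR-IOV
    print(ovdk_vfs, len(sriov_vfs))
```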
  • In this way, one PF is configured with a plurality of VFs, at least one VF is configured to support the OVDK function, and all the other VFs are configured to support the SR-IOV function, so that the PF can support the OVDK technology and the SR-IOV technology at the same time, thus reducing a hardware requirement of the network device, improving a flexibility of environment for using the network device, and reducing a cost of the network device.
  • The physical function multiplexing method provided by the embodiments of the present disclosure, as shown in FIG. 2, further includes the following step 130 to step 160.
  • In step 130: first virtual network card resource information corresponding to the at least one first VF is generated, and the first virtual network card resource information is sent to a control node.
  • Here, the control node is associated with the first VM, and the user creates the first VM through a network management function of the control node. The control node may be set on the same physical server as the computing node, or may be arranged separately from the computing node, that is, located on another physical server.
  • Specifically, the first virtual network card resource includes a virtualized network resource that may be provided by the at least one first VF.
  • In step 140: second virtual network card resource information corresponding to other VF excluding the at least one first VF in the at least two VFs is generated, and the second virtual network card resource information provided by any VF in the other VF is sent to the control node.
  • Specifically, the second virtual network card resource includes a virtualized network resource that may be provided by any VF excluding the at least one first VF.
  • Specifically, the computing node collects virtual network card resource information that may be provided on the first VM, wherein the virtual network card resource information includes PCI address information of a virtual network card inside the first VM. The OVDK service process reports the virtual network card resource information of the first VF associated therewith to the control node, and the virtual network card resource may provide a virtual network card service based on the OVDK technology. The SR-IOV service process reports the virtual network card resource information of all the other virtual functions VFs associated therewith to the control node, and then saves the virtual network card resource information in a resource pool. The control node collects and stores all the reported virtual network card resource information.
  • For example, after the steps in the above embodiment, the computing node controls the OVDK service process to report the virtual network card resource information of VF0 to the control node (e.g., denoted as Controller1), the computing node controls the SR-IOV service process to report the virtual network card resource information of all other VFs (VF1 to VF62) to Controller1, and controls Controller1 to collect and store all the received virtual network card resource information.
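  • Steps 130 and 140 amount to the two service processes describing the VFs they are associated with and pushing that description to the control node. The sketch below illustrates one possible shape of the reported virtual network card resource information (PCI address plus backing type); the reporting endpoint shown is purely hypothetical and stands in for whatever channel Controller1 actually exposes.

```python
import json
import urllib.request

# Hypothetical control-node (Controller1) endpoint; the real reporting channel is not specified here.
CONTROLLER_URL = "http://controller1.example:8080/vnic-resources"

def report_vnic_resources(host: str, backing: str, vf_pci_addrs: list) -> None:
    """Report virtual network card resource information for a set of VFs.

    backing is "ovdk" for the first VF(s) and "sriov" for all the other VFs.
    """
    payload = [{"host": host, "backing": backing, "pci_address": a} for a in vf_pci_addrs]
    req = urllib.request.Request(
        CONTROLLER_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)  # the control node saves the entries in its resource pool

if __name__ == "__main__":
    # OVDK service process reports VF0; SR-IOV service process reports VF1 to VF62.
    report_vnic_resources("S1", "ovdk", ["0000:03:10.0"])
    report_vnic_resources("S1", "sriov", ["0000:03:10.1", "0000:03:10.2"])  # ... up to VF62
```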
  • In step 150: the control node receives a virtual network card resource allocation request and generates a deployment command according to the first virtual network card resource information and/or the second virtual network card resource information.
  • Specifically, when a network element needs to use a network card for data transmission, the network element sends the virtual network card resource allocation request to the control node. When the control node receives the virtual network card resource allocation request from the network element, the control node queries locally stored virtual network card resource information which includes the first virtual network card resource information and the second virtual network card resource information, and generates a deployment command according to the virtual network card resource information.
  • For example, a media gateway device M1 needs to transmit data, which includes transmission of a large amount of media data and interaction of flow control information. The SR-IOV technology can provide services for the transmission of the large amount of media data, and the OVDK technology can provide services for the interaction of the flow control information. The Controller1 receives virtual network card resource allocation request information R1 sent from the media gateway device M1, and generates a deployment command O1 according to the virtual network card resource information stored locally.
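  • A minimal sketch of the control-node side of step 150 follows: given the stored resource pool, an allocation request naming the types of virtual network card a network element needs (here the media gateway M1 asking for one SR-IOV card and one OVDK card) is turned into a deployment command. The data shapes are assumptions made for illustration only, not message formats defined by the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class ControlNode:
    # Resource pool: backing type ("ovdk" / "sriov") -> free VF PCI addresses.
    pool: dict = field(default_factory=lambda: {"ovdk": [], "sriov": []})

    def store(self, backing: str, pci_address: str) -> None:
        """Save reported virtual network card resource information."""
        self.pool[backing].append(pci_address)

    def allocate(self, element: str, request: dict) -> dict:
        """Generate a deployment command for a virtual network card resource allocation request.

        request maps backing type to the number of virtual network cards required,
        e.g. {"sriov": 1, "ovdk": 1} for media data plus flow-control interaction.
        """
        command = {"element": element, "interfaces": []}
        for backing, count in request.items():
            for _ in range(count):
                command["interfaces"].append(
                    {"backing": backing, "pci_address": self.pool[backing].pop(0)}
                )
        return command

if __name__ == "__main__":
    controller1 = ControlNode()
    controller1.store("ovdk", "0000:03:10.0")    # VF0
    controller1.store("sriov", "0000:03:10.1")   # VF1
    o1 = controller1.allocate("M1", {"sriov": 1, "ovdk": 1})
    print(o1)  # deployment command O1 for media gateway device M1
```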
  • In step 160: the network element is deployed according to the deployment command.
  • Here, the at least one first VF and any VF in the other VF are allocated to the network element for use.
  • Specifically, after the computing node receives the deployment command, the OVDK service process and the SR-IOV service process running on the computing node respectively allocate the at least one first VF supporting the OVDK service and any VF in the other VF supporting the SR-IOV service to the network element for use, according to the deployment command.
  • For example, after the physical server S1 receives the deployment command O1, the OVDK service process allocates VF0 to the media gateway device M1 to provide an interactive service of the flow control information to the media gateway device M1, and the SR-IOV service process allocates VF1 to the media gateway device M1 to provide a media data transmission service to the media gateway device M1 to complete the gateway deployment.
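  • On the computing-node side of step 160, the deployment command is simply split between the two service processes. The sketch below only illustrates that dispatch; the two allocation functions are stand-ins for the OVDK and SR-IOV service process interfaces, which are not specified by the disclosure.

```python
def ovdk_service_allocate(element: str, pci_address: str) -> None:
    # Stand-in for the OVDK service process attaching its virtual switch port to the element.
    print(f"OVDK service: allocate {pci_address} to {element} for flow-control interaction")

def sriov_service_allocate(element: str, pci_address: str) -> None:
    # Stand-in for the SR-IOV service process passing the VF through to the element.
    print(f"SR-IOV service: allocate {pci_address} to {element} for media data transmission")

def deploy_network_element(command: dict) -> None:
    """Dispatch each interface of the deployment command to the matching service process."""
    for iface in command["interfaces"]:
        if iface["backing"] == "ovdk":
            ovdk_service_allocate(command["element"], iface["pci_address"])
        else:
            sriov_service_allocate(command["element"], iface["pci_address"])

if __name__ == "__main__":
    o1 = {"element": "M1", "interfaces": [
        {"backing": "sriov", "pci_address": "0000:03:10.1"},  # VF1 -> media data
        {"backing": "ovdk", "pci_address": "0000:03:10.0"},   # VF0 -> flow control
    ]}
    deploy_network_element(o1)
```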
  • In this way, one PF is configured with a plurality of VFs, at least one VF is configured to support the OVDK function, and all the other VFs are configured to support the SR-IOV function, so that the PF can support the OVDK technology and the SR-IOV technology at the same time. This enables an operator to deploy network elements that require hybrid network performance while using the fewest PFs, reduces the hardware requirement for deploying a network element that needs both high forwarding performance and multiple virtual network cards, improves the flexibility of the operator's usage environment, and reduces the cost of the network device.
  • As shown in FIG. 3, the embodiments of the present disclosure provide a physical function multiplexing apparatus applied to a network device, wherein the network device includes at least one virtual machine VM; a first VM in the at least one VM has at least one physical function PF; the first VM is any VM in the at least one VM; a first PF in the at least one PF is configured with at least two virtual functions VFs; and the first PF is any PF in the at least one PF. The apparatus includes a first configuration module and a second configuration module. The first configuration module is configured to configure at least one first VF in the at least two VFs to support an OVDK function, the first VF is any VF in the at least two VFs. The second configuration module is configured to configure other VF excluding the at least one first VF in the at least two VFs to support an SR-IOV function.
  • In one embodiment, the first configuration module is specifically configured to: configure the at least one first VF to load a DPDK driver.
  • In one embodiment, the first configuration module is further configured to: generate first virtual network card resource information provided by the at least one first VF, and send the first virtual network card resource information to a control node. The second configuration module is further configured to: generate second virtual network card resource information corresponding to the other VF excluding the at least one first VF in the at least two VFs, and send the second virtual network card resource information provided by any VF in the other VF to the control node.
  • In one embodiment, as shown in FIG. 4, the apparatus further includes the control node configured to receive the virtual network card resource allocation request and generate a deployment command according to the first virtual network card resource information and/or the second virtual network card resource information; and a first configuration module and/or a second configuration module configured to allocate a network element according to the deployment command.
  • In one embodiment, the first configuration module is specifically configured to: allocate the at least one first VF to the network element for use; and the second configuration module is specifically configured to: allocate any VF in the other VF to the network element for use.
  • The apparatus embodiments of the present disclosure are implemented with reference to the above-mentioned method embodiments of the present disclosure.
  • In this way, one physical function PF is configured with a plurality of virtual functions VFs, at least one VF is configured to support the OVDK function, and all the other VFs are configured to support the SR-IOV function, so that the physical function PF can support the OVDK technology and the SR-IOV technology at the same time. This enables an operator to deploy network elements that require hybrid network performance while using the fewest PFs, reduces the hardware requirement for deploying a network element that needs both high forwarding performance and multiple virtual network cards, improves the flexibility of the operator's usage environment, and reduces the cost of the network device.
  • In the above embodiments of the present disclosure, the first configuration module and the second configuration module in the physical function multiplexing apparatus may be realized by a CPU, a DSP, an MCU or an FPGA in practical application. The control node may be implemented by a physical server.
  • It should be noted that: when the physical function multiplexing apparatus provided in the above embodiment performs physical function multiplexing, only the division of the above-mentioned program modules is exemplified. In practical application, the above-mentioned processing allocation may be completed by different program modules as required, that is, an internal structure of the apparatus is divided into different program modules to complete all or part of the above-described processing. In addition, the physical function multiplexing apparatus provided in the above embodiments belongs to the same concept as the embodiments of the physical function multiplexing method, and the specific implementation process is detailed in the method embodiments, and will not be elaborated here.
  • First specific example
  • Embodiments of the present disclosure further provide a physical function multiplexing method, which is applied at a virtual machine. The virtual machine is loaded with an OVDK service process and an SR-IOV service process. As shown in FIG. 5, a network card multiplexing apparatus applied in the present disclosure is as follows: a virtual machine VM is created on a server, the virtual machine VM has a physical function PF1, and 63 virtual functions VFs are generated; the server is loaded with the OVDK service process and the SR-IOV service process, the OVDK service process is associated with VF0 and the SR-IOV service process is associated with VF1 to VF62, and PF1 is linked to a switch.
  • As shown in FIG. 6, a first server is created with VM1, the first server is loaded with the OVDK service process and the SR-IOV service process, the OVDK service process is associated with PF2 and linked to a virtual switch; the SR-IOV service process is associated with PF3, and both PF2 and PF3 are linked to the switch. A second server is created with VM2, PF1 is on VM2, and VF0 to VF62 are generated; the second server is loaded with the OVDK service process and the SR-IOV service process, the OVDK service process is associated with VF0, the SR-IOV service process is associated with VF1 to VF62, and PF1 is linked to the switch.
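  • For reference, the FIG. 6 topology described above can be restated as plain data; the sketch below only mirrors that description (all names are taken from the figure) and is not an additional configuration format defined by the disclosure.

```python
# Restatement of the FIG. 6 topology as a data structure (illustrative only).
TOPOLOGY = {
    "first_server": {
        "vm": "VM1",
        "ovdk": {"pf": "PF2"},    # OVDK service process associated with PF2, linked to the virtual switch
        "sriov": {"pf": "PF3"},   # SR-IOV service process associated with PF3
        "switch_uplinks": ["PF2", "PF3"],        # both PFs are linked to the switch
    },
    "second_server": {
        "vm": "VM2",
        "pf": "PF1",
        "vfs": [f"VF{i}" for i in range(63)],    # VF0 to VF62 generated on PF1
        "ovdk": {"vf": "VF0"},                   # OVDK service process associated with VF0
        "sriov": {"vfs": [f"VF{i}" for i in range(1, 63)]},  # SR-IOV: VF1 to VF62
        "switch_uplinks": ["PF1"],               # PF1 is linked to the switch
    },
}

if __name__ == "__main__":
    print(len(TOPOLOGY["second_server"]["sriov"]["vfs"]))  # 62 SR-IOV VFs on VM2
```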
  • In some embodiments of the present disclosure, control flows for physical function multiplexing are shown in FIGs. 7 and 8, respectively, and the steps are as follows; an illustrative sketch follows each flow.
  • In S201: a computing node on VM2 controls the OVDK service process to load a DPDK driver for VF0 and link the DPDK driver to the virtual switch.
  • In S202: a computing node on VM1 controls the SR-IOV service process to report virtualized network resources thereof to a control node, wherein the virtualized network resources refer to all network resources of PF3, and the control node saves the virtualized network resources to a resource pool.
  • In S203: the computing node on VM1 controls the OVDK service process to report virtualized network resources thereof to the control node, wherein the virtualized network resources refer to all network resources of PF2, and PF2 may provide a virtual network card service based on the OVDK technology on the first server.
  • In S204: the computing node on VM2 controls the SR-IOV service process to report virtualized network resources thereof to the control node, wherein the virtualized network resources refer to virtual functions VF1 to VF62 of PF1, and the control node saves the virtualized network resources to the resource pool.
  • In S205: the computing node on VM2 controls the OVDK service process to report virtualized network resources thereof to the control node, wherein the virtualized network resources refer to VF0, and VF0 may provide a virtual network card service based on the OVDK technology on VM2.
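  • As a non-authoritative sketch of S201 to S205, the snippet below binds VF0 to a DPDK-compatible driver with the stock DPDK bind tool, attaches it to an Open vSwitch (OVS-DPDK) bridge, and builds the kind of virtualized-network-resource report a service process might send to the control node. The PCI address, bridge name, port name, and report format are illustrative assumptions; the disclosure does not prescribe them.

```python
import subprocess

# Illustrative identifiers only: VF0's PCI address and the OVS bridge/port names
# are assumptions of this sketch, not values taken from the disclosure.
VF0_PCI = "0000:03:10.0"
OVS_BRIDGE = "br-phy"

def bind_vf_to_dpdk(pci_addr: str) -> None:
    """S201: load a DPDK-compatible (vfio-pci) driver for VF0 using the DPDK bind tool."""
    subprocess.run(["dpdk-devbind.py", "--bind=vfio-pci", pci_addr], check=True)

def attach_vf_to_virtual_switch(pci_addr: str, bridge: str, port: str = "dpdkvf0") -> None:
    """S201 (continued): expose VF0 to the virtual switch as an OVS-DPDK port."""
    subprocess.run(
        ["ovs-vsctl", "add-port", bridge, port, "--",
         "set", "Interface", port, "type=dpdk",
         f"options:dpdk-devargs={pci_addr}"],
        check=True,
    )

def build_resource_report(host: str, service: str, resources: list[str]) -> dict:
    """S202 to S205: shape of a virtualized-network-resource report sent to the control node
    (a hypothetical message format used here only for illustration)."""
    return {"host": host, "service": service, "virtual_network_card_resources": resources}

if __name__ == "__main__":
    bind_vf_to_dpdk(VF0_PCI)
    attach_vf_to_virtual_switch(VF0_PCI, OVS_BRIDGE)
    # VM2's two reports per S204 and S205: VF1..VF62 for SR-IOV, VF0 for OVDK.
    sriov_report = build_resource_report("VM2", "SR-IOV", [f"VF{i}" for i in range(1, 63)])
    ovdk_report = build_resource_report("VM2", "OVDK", ["VF0"])
```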
  • In S301: the user sends a management command to deploy VM1 and VM2 through the control node, wherein both VM1 and VM2 need to use two virtual network interfaces, and types of the virtual network interfaces include a high-performance forwarding SRIOV-direct type and a non-SRIOV common virtual switch tap type, that is, both VM1 and VM2 need to use two different virtual network card services.
  • In S302: the control node queries that both VM1 and VM2 are capable of providing the above two virtual network card services at the same time.
  • In S303: the control node initiates a deployment control command to VM1 and VM2, and notifies the SR-IOV service process and the OVDK service process to allocate the network resources. On VM1, the computing node controls the SR-IOV service process and the OVDK service process to call PF2 and PF3 respectively for data exchange according to the control command. On VM2, the computing node controls the SR-IOV service process to call VF1 to VF62 for data exchange according to the control command, and the computing node controls the OVDK service process to call VF0 for data exchange.
  • In S304: VM1 and VM2 are deployed, and data transmission and intercommunication are successful. VM1 calls PF2 and PF3 for data transmission, and VM2 uses PF1 for data transmission.
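  • The following sketch models, under assumed data structures, how a control node might carry out S302 and S303: it keeps the reported resources in a pool, checks that a host can provide both virtual network card services, and emits a deployment command that pairs one SR-IOV resource with one OVDK-backed resource. The class and field names are hypothetical and chosen only for readability.

```python
from dataclasses import dataclass, field

@dataclass
class HostResources:
    ovdk_resources: list[str] = field(default_factory=list)    # first virtual network card resource info
    sriov_resources: list[str] = field(default_factory=list)   # second virtual network card resource info

@dataclass
class DeploymentCommand:
    host: str
    sriov_direct_vf: str   # high-performance forwarding "SRIOV-direct" interface
    tap_backend: str       # non-SRIOV common virtual switch "tap" interface backend

class ControlNode:
    def __init__(self) -> None:
        self.pool: dict[str, HostResources] = {}

    def register(self, host: str, resources: HostResources) -> None:
        """S202 to S205: save reported virtualized network resources into the resource pool."""
        self.pool[host] = resources

    def allocate(self, host: str) -> DeploymentCommand:
        """S302/S303: verify the host offers both services, then build a deployment command."""
        res = self.pool[host]
        if not res.ovdk_resources or not res.sriov_resources:
            raise RuntimeError(f"{host} cannot provide both virtual network card services")
        return DeploymentCommand(host=host,
                                 sriov_direct_vf=res.sriov_resources[0],
                                 tap_backend=res.ovdk_resources[0])

if __name__ == "__main__":
    node = ControlNode()
    node.register("VM1", HostResources(ovdk_resources=["PF2"], sriov_resources=["PF3"]))
    node.register("VM2", HostResources(ovdk_resources=["VF0"],
                                       sriov_resources=[f"VF{i}" for i in range(1, 63)]))
    for host in ("VM1", "VM2"):
        print(node.allocate(host))
```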
  • Second specific example
  • Embodiments of the present disclosure further disclose a physical function multiplexing apparatus, as shown in FIG. 9. In addition to the components described in the first specific example above, the physical function multiplexing apparatus further includes PF4 on the second server, which provides a storage service through a Hypervisor and is linked to the virtual switch. The Hypervisor is a middleware layer running between the underlying physical server and the operating system, and may access all physical devices on the server, including disks and memories. The control of the physical function multiplexing according to the embodiments of the present disclosure is as described in the first specific example above.
  • In this way, one PF is configured with a plurality of virtual functions VFs, one VF is configured to support the OVDK function, and all the other VFs are configured to support the SR-IOV function, so that the physical function PF can support the OVDK technology and the SR-IOV technology at the same time. This enables an operator to deploy network elements with hybrid network performance requirements using the fewest PFs, reduces the hardware required to deploy a network element that needs both high forwarding performance and multiple virtual network cards, improves the flexibility of the operator's usage environment, and reduces the cost of the network device.
  • Embodiments of the present disclosure further provide a computer readable storage medium storing an executable program thereon, wherein the executable program is executable by a processor to implement any of the physical function multiplexing methods mentioned above.
  • The embodiments of the present disclosure further provide a physical function multiplexing apparatus, which includes: a processor and a memory configured to store a computer program operable on the processor, wherein the processor is configured to execute any of the physical function multiplexing methods mentioned above when running the computer program.
  • It is to be understood that the memory may be implemented by any type of volatile or non-volatile memory device, or a combination thereof. The non-volatile memory may be a Read Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Ferroelectric Random Access Memory (FRAM), a Flash Memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM). The magnetic surface memory may be a magnetic disc memory or a magnetic tape memory. The volatile memory may be a Random Access Memory (RAM) that acts as an external high-speed cache. By way of exemplary rather than restrictive illustration, a variety of forms of RAMs are available, such as a Static Random Access Memory (SRAM), a Synchronous Static Random Access Memory (SSRAM), a Dynamic Random Access Memory (DRAM), a Synchronous Dynamic Random Access Memory (SDRAM), a Double Data Rate Synchronous Dynamic Random Access Memory (DDRSDRAM), an Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), a SyncLink Dynamic Random Access Memory (SLDRAM), and a Direct Rambus Random Access Memory (DRRAM). The memories described in the embodiments of the present disclosure are intended to include, but are not limited to, these and any other suitable types of memories.
  • The method disclosed in the above embodiments of the present disclosure may be applied to a processor or implemented by the processor. The processor may be an integrated circuit chip with a signal processing capability. In an implementation process, the steps of the foregoing methods may be completed by an integrated logic circuit of hardware in the processor or by instructions in the form of software. The above-mentioned processor may be a general-purpose processor, a DSP, another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The methods, steps, and logic diagrams disclosed in the embodiments of the present disclosure may be implemented or executed by the processor. The general-purpose processor may be a microprocessor, any conventional processor, or the like. Steps of the methods disclosed with reference to the embodiments of the present disclosure may be directly executed and accomplished by a hardware decoding processor, or executed and accomplished by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium, and the storage medium is located in a memory. The processor reads information in the memory and completes the steps of the foregoing methods in combination with its hardware.
  • In the embodiments, the apparatus may be implemented by one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), FPGAs, general-purpose processors, controllers, MCUs, Microprocessors, or other electronic components for executing the aforementioned method.
  • In the several embodiments provided in the present application, it should be understood that the disclosed device and method may be implemented in other manners. The device embodiments described above are merely illustrative. For example, the division of units is only a division by logical function, and other division modes are possible in actual implementation; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
  • The units described above as separate parts may or may not be physically separated, and the parts illustrated as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions in the embodiments.
  • In addition, each functional unit in each embodiment of the present disclosure may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units above may be implemented in the form of hardware, or in the form of hardware and software functional units.
  • Those having ordinary skill in the art should understand that all or a part of the steps of the foregoing method embodiments may be implemented by instructing relevant hardware through a program. The program may be stored in a computer-readable storage medium, and when the program is executed, the steps of the above-mentioned method embodiments are performed. The foregoing storage medium includes any medium capable of storing program code, such as a removable storage device, a ROM, a RAM, a magnetic disk, or an optical disc.
  • Alternatively, the above integrated units of the present disclosure may be stored in a computer-readable storage medium if implemented in the form of a software functional module and sold or used as an independent product. Based on such understanding, the essence of the technical solutions of the embodiments of the present disclosure, or the part contributing to the prior art, may be embodied in the form of a software product, which is stored in a storage medium and includes a number of instructions causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the method described in each of the embodiments of the present disclosure. The foregoing storage medium includes any medium capable of storing program code, such as a removable storage device, a ROM, a RAM, a magnetic disk, or an optical disc.
  • The foregoing descriptions are merely specific embodiments of the present disclosure, but the protection scope of the present disclosure is not limited thereto. Any person skilled in the art can easily think of changes or substitutions within the technical scope disclosed by the present disclosure, and all such changes or substitutions shall fall within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (11)

  1. A physical function multiplexing method, characterized in that, the method is applied at a network device, and the network device comprises at least one virtual machine VM; a first VM in the at least one VM has at least one physical function PF; the first VM is any VM in the at least one VM; a first PF in the at least one PF is configured with at least two virtual functions VFs, and the first PF is any PF in the at least one PF; wherein the method comprises:
    configuring (110) at least one first VF in the at least two VFs to support an OpenVswitch standard and a Data Plane Development Kit, OVDK, function, the first VF being any VF in the at least two VFs; and
    configuring (120) other VF excluding the at least one first VF in the at least two VFs to support a single root I/O virtualization SR-IOV function.
  2. The method according to claim 1, further comprising:
    generating (130) first virtual network card resource information corresponding to the at least one first VF;
    sending the first virtual network card resource information to a control node;
    generating (140) second virtual network card resource information corresponding to the other VF excluding the at least one first VF in the at least two VFs; and
    sending the second virtual network card resource information provided by any VF in the other VF to the control node.
  3. The method according to claim 2, further comprising:
    receiving (150), by the control node, a virtual network card resource allocation request and generating a deployment command according to the first virtual network card resource information and/or the second virtual network card resource information; and
    allocating (160) network card resource according to the deployment command.
  4. The method according to claim 3, wherein the allocating network card resource according to the deployment command, comprises:
    allocating the at least one first VF and any VF in the other VF to a network element corresponding to the virtual network card resource allocation request.
  5. The method according to claim 1, wherein the configuring (110) at least one first VF in the at least two VFs to support the OVDK function, comprises:
    configuring the at least one first VF to load a DPDK driver.
  6. A physical function multiplexing apparatus, characterized in that, the apparatus is applied to a network device, and the network device comprises at least one virtual machine VM; a first VM in the at least one VM has at least one physical function PF; the first VM is any VM in the at least one VM; a first PF in the at least one PF is configured with at least two virtual functions VFs, and the first PF is any PF in the at least one PF; wherein the apparatus comprises:
    a first configuration module configured to configure at least one first VF in the at least two VFs to support an OpenVswitch standard and a Data Plane Development Kit, OVDK, function, the first VF being any VF in the at least two VFs; and
    a second configuration module configured to configure other VF excluding the at least one first VF in the at least two VFs to support a single root I/O virtualization SR-IOV function.
  7. The apparatus according to claim 6, wherein,
    the first configuration module is further configured to: generate first virtual network card resource information provided by the at least one first VF, and send the first virtual network card resource information to a control node; and
    the second configuration module is further configured to: generate second virtual network card resource information corresponding to the other VF excluding the at least one first VF in the at least two VFs; and send the second virtual network card resource information provided by any VF in the other VF to the control node.
  8. The apparatus according to claim 7, further comprising:
    the control node, configured to receive a virtual network card resource allocation request and generate a deployment command according to the first virtual network card resource information and/or the second virtual network card resource information; wherein,
    the first configuration module and/or the second configuration module is configured to allocate network card resource according to the deployment command.
  9. The apparatus according to claim 8, wherein,
    the first configuration module is configured to: allocate the at least one first VF to a network element corresponding to the virtual network card resource allocation request; and
    the second configuration module is configured to: allocate any VF in the other VF to the network element corresponding to the virtual network card resource allocation request.
  10. The apparatus according to claim 6, wherein the first configuration module is configured to: configure the at least one first VF to load a DPDK driver.
  11. A computer readable medium storing a computer program thereon, wherein the computer program, when executed by a processor, executes the method according to any one of claims 1 to 5.
EP19841350.2A 2018-07-23 2019-07-03 Physical function multiplexing method and apparatus and computer storage medium Active EP3820083B1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810810147.5A CN110752937B (en) 2018-07-23 2018-07-23 Physical function multiplexing method and device and computer storage medium
PCT/CN2019/094467 WO2020019950A1 (en) 2018-07-23 2019-07-03 Physical function multiplexing method and apparatus and computer storage medium

Publications (3)

Publication Number Publication Date
EP3820083A1 EP3820083A1 (en) 2021-05-12
EP3820083A4 EP3820083A4 (en) 2021-08-04
EP3820083B1 true EP3820083B1 (en) 2023-03-01

Family

ID=69181267

Family Applications (1)

Application Number Title Priority Date Filing Date
EP19841350.2A Active EP3820083B1 (en) 2018-07-23 2019-07-03 Physical function multiplexing method and apparatus and computer storage medium

Country Status (3)

Country Link
EP (1) EP3820083B1 (en)
CN (1) CN110752937B (en)
WO (1) WO2020019950A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113535319A (en) * 2020-04-09 2021-10-22 深圳致星科技有限公司 Method, equipment and storage medium for realizing multiple RDMA network card virtualization
CN113778626A (en) * 2021-08-31 2021-12-10 山石网科通信技术股份有限公司 Hot plug processing method and device for virtual network card, storage medium and processor
CN114844744B (en) * 2022-03-04 2023-07-21 阿里巴巴(中国)有限公司 Virtual private cloud network configuration method and device, electronic equipment and computer readable storage medium
US20240004679A1 (en) * 2022-06-29 2024-01-04 Microsoft Technology Licensing, Llc Accelerating Networking by Multiplexing Driver Data Paths

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104272288B (en) * 2012-06-08 2018-01-30 英特尔公司 For realizing the method and system of virtual machine VM Platform communication loopbacks
US9734096B2 (en) * 2013-05-06 2017-08-15 Industrial Technology Research Institute Method and system for single root input/output virtualization virtual functions sharing on multi-hosts
CN104461958B (en) * 2014-10-31 2018-08-21 华为技术有限公司 Support storage resource access method, storage control and the storage device of SR-IOV
US9898430B2 (en) * 2014-11-12 2018-02-20 Vmware, Inc. Tracking virtual machine memory modified by a single root I/O virtualization (SR-IOV) device
CN105790991A (en) * 2014-12-24 2016-07-20 中兴通讯股份有限公司 Link aggregation method and system for virtualization server and intelligent network adapter thereof
CN105808167B (en) * 2016-03-10 2018-12-21 深圳市杉岩数据技术有限公司 A kind of method, storage equipment and the system of the link clone based on SR-IOV
CN106250211A (en) * 2016-08-05 2016-12-21 浪潮(北京)电子信息产业有限公司 A kind of virtualization implementation method based on SR_IOV
CN107643938A (en) * 2017-08-24 2018-01-30 中国科学院计算机网络信息中心 Data transmission method, device and storage medium

Also Published As

Publication number Publication date
EP3820083A4 (en) 2021-08-04
EP3820083A1 (en) 2021-05-12
WO2020019950A1 (en) 2020-01-30
CN110752937A (en) 2020-02-04
CN110752937B (en) 2022-04-15

Similar Documents

Publication Publication Date Title
EP3820083B1 (en) Physical function multiplexing method and apparatus and computer storage medium
US20210004258A1 (en) Method and Apparatus for Creating Virtual Machine
WO2017152633A1 (en) Port binding implementation method and device
RU2640724C1 (en) Method of troubleshooting process, device and system based on virtualization of network functions
WO2015196931A1 (en) Disk io-based virtual resource allocation method and device
US11301303B2 (en) Resource pool processing to determine to create new virtual resource pools and storage devices based on currebt pools and devices not meeting SLA requirements
JP6680901B2 (en) Management method and device
CN106557444B (en) Method and device for realizing SR-IOV network card and method and device for realizing dynamic migration
CN103609077B (en) Method, apparatus and system for data transmission, and physical adapter
JP2016541072A (en) Resource processing method, operating system, and device
EP3240238B1 (en) System and method for reducing management ports of a multiple node chassis system
CN108132827B (en) Network slice resource mapping method, related equipment and system
EP4044507A1 (en) Network resource management method and system, network equipment and readable storage medium
CN113312142A (en) Virtualization processing system, method, device and equipment
KR102684903B1 (en) Network operation methods, devices, facilities and storage media
US9755986B1 (en) Techniques for tightly-integrating an enterprise storage array into a distributed virtualized computing environment
CN114448978B (en) Network access method and device, electronic equipment and storage medium
CN107807840B (en) Equipment direct connection method and device applied to virtual machine network
JP6878570B2 (en) Methods and devices for resource reconfiguration
CN110795202A (en) Resource allocation method and device of virtualized cluster resource management system
CN116800616B (en) Management method and related device of virtualized network equipment
CN113127144B (en) Processing method, processing device and storage medium
US10216599B2 (en) Comprehensive testing of computer hardware configurations
WO2017070963A1 (en) Method, apparatus, and system for deploying virtual resources
US20210157626A1 (en) Prioritizing booting of virtual execution environments

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20210127

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: ZTE CORPORATION

A4 Supplementary search report drawn up and despatched

Effective date: 20210702

RIC1 Information provided on ipc code assigned before grant

Ipc: H04L 12/24 20060101AFI20210628BHEP

Ipc: G06F 9/455 20180101ALI20210628BHEP

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
REG Reference to a national code

Ref country code: DE

Ref legal event code: R079

Ref document number: 602019025973

Country of ref document: DE

Free format text: PREVIOUS MAIN CLASS: H04L0012240000

Ipc: H04L0041080600

RIC1 Information provided on ipc code assigned before grant

Ipc: G06F 9/455 20180101ALI20220203BHEP

Ipc: H04L 41/0806 20220101AFI20220203BHEP

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20220502

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

INTG Intention to grant announced

Effective date: 20221122

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

Ref country code: AT

Ref legal event code: REF

Ref document number: 1551794

Country of ref document: AT

Kind code of ref document: T

Effective date: 20230315

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602019025973

Country of ref document: DE

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG9D

REG Reference to a national code

Ref country code: NL

Ref legal event code: MP

Effective date: 20230301

P01 Opt-out of the competence of the unified patent court (upc) registered

Effective date: 20230530

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230301

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230601

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230301

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230301

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230301

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230301

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230301

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230301

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230301

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230602

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230301

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230301

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230301

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230703

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230301

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230301

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: IE

Payment date: 20230719

Year of fee payment: 5

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230301

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230701

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: BE

Payment date: 20230719

Year of fee payment: 5

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602019025973

Country of ref document: DE

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230301

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230301

REG Reference to a national code

Ref country code: DE

Ref legal event code: R119

Ref document number: 602019025973

Country of ref document: DE

26N No opposition filed

Effective date: 20231204

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230301

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20230703

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20230703

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20240201

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20230731

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20230703

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230301

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20230731