CN114584541B - Method for accelerating virtual machine network - Google Patents

Method for accelerating virtual machine network

Info

Publication number
CN114584541B
Authority
CN
China
Prior art keywords
virtual machine
kernel
vhost
data packet
receiving
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210221851.3A
Other languages
Chinese (zh)
Other versions
CN114584541A (en)
Inventor
杨燚
孙思清
高传集
李彦君
肖雪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Inspur Cloud Information Technology Co Ltd
Original Assignee
Inspur Cloud Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inspur Cloud Information Technology Co Ltd filed Critical Inspur Cloud Information Technology Co Ltd
Priority to CN202210221851.3A priority Critical patent/CN114584541B/en
Publication of CN114584541A publication Critical patent/CN114584541A/en
Application granted granted Critical
Publication of CN114584541B publication Critical patent/CN114584541B/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/34 Network arrangements or protocols for supporting network services or applications involving the movement of software or configuration parameters
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L61/00 Network arrangements, protocols or services for addressing or naming
    • H04L61/09 Mapping addresses
    • H04L61/10 Mapping addresses of different types
    • H04L61/103 Mapping addresses of different types across network layers, e.g. resolution of network layer into physical layer addresses or address resolution protocol [ARP]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/06 Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention provides a method for accelerating a virtual machine network, which comprises: modifying the tap interface kernel driver on a node; setting address vectors for transmitted and received data packets; creating a virtual machine; storing the packet address vectors in the vhost kernel driver; the DMA data transfer session; setting the sender address vector of the DMA controller; confirming whether the data packet buffer can be recycled; and setting the receive address vector through user-mode DMA. Once deployed, the scheme not only allows virtual machine memory to be overcommitted but also allows the number of virtual machines to be overcommitted, because the CPU resources consumed by vhost kernel threads are greatly reduced, freeing CPU resources to run more virtual machines. At the same time, user-mode and kernel-mode virtual switch nodes can be mixed as required, and a virtual machine on any node can be seamlessly migrated to any other node, which solves the problems and challenges of deploying user-mode virtual switches.

Description

Method for accelerating virtual machine network
Technical Field
The invention relates to the technical field of cloud computing virtual networks, in particular to a method for accelerating a virtual machine network.
Background
A user-mode virtual switch uses large-page physical memory to achieve fast packet transfer with the network interface of a virtual machine, but this requires that the virtual machine also use large-page physical memory. That is basically unacceptable in public cloud scenarios, because the memory of the virtual machines sold is normally overcommitted: if a physical machine has 32 GB of memory, the total memory of all virtual machines on it may be sold as 64 GB or more, whereas if only large-page physical memory can be used, the total memory sold cannot exceed 32 GB. In addition, current user-mode virtual machines cannot be migrated to a traditional kernel-mode virtual switch. These two drawbacks are the major problems faced by current user-mode virtual switches.
Disclosure of Invention
The embodiment of the invention provides a method for accelerating a virtual machine network, which can speed up packet exchange between a virtual machine and the virtual switch.
A method of accelerating a virtual machine network comprising:
modifying the tap interface kernel driver on the node;
setting address vectors for transmitted and received data packets;
creating a virtual machine;
storing the packet address vectors in the vhost kernel driver;
running a DMA data transfer session;
setting the sender address vector of the DMA controller;
confirming whether the data packet buffer can be recycled;
setting the receive address vector through user-mode DMA.
Optionally,
in the process of setting the address vectors for transmitted and received data packets, the user-mode virtual switch needs to initialize the DMA controller using the interface exposed by the user-mode driver and to set the send and receive address vectors.
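The "address vector" here can be pictured as an array of (physical address, length) descriptors that the switch hands to the DMA controller. A minimal C sketch under that reading; all names (`dma_addr_entry`, `dma_vec_add`, `DMA_VEC_MAX`) are hypothetical, since the patent does not name its driver interface:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* One entry of a transmit/receive address vector: a packet buffer the
 * DMA controller may read from (send side) or write into (receive side). */
struct dma_addr_entry {
    uint64_t phys_addr;   /* physical address of the packet buffer */
    uint32_t len;         /* buffer length in bytes */
};

#define DMA_VEC_MAX 64    /* illustrative capacity, not from the patent */

struct dma_addr_vector {
    struct dma_addr_entry entries[DMA_VEC_MAX];
    size_t count;
};

/* Append one buffer descriptor; returns 0 on success, -1 if full. */
int dma_vec_add(struct dma_addr_vector *v, uint64_t pa, uint32_t len)
{
    if (v->count >= DMA_VEC_MAX)
        return -1;
    v->entries[v->count].phys_addr = pa;
    v->entries[v->count].len = len;
    v->count++;
    return 0;
}
```

In this reading, "initializing the DMA controller" would amount to handing it a pair of such vectors, one for each direction.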
Optionally,
in the process of creating the virtual machine, a user-mode virtual switch is used, but a traditional tap interface is used instead of vhost-user, which ensures that the created virtual machine can be migrated seamlessly between nodes running a kernel-mode virtual switch and nodes running a user-mode virtual switch.
Optionally,
before the vhost kernel driver can store the packet address vectors, the tap interface kernel driver of the node must first be modified, and so must the vhost kernel driver. The user-mode virtual switch sets the packet address vectors through the ioctl interface exposed by the tap interface driver; the modified vhost kernel driver can then obtain the address vectors stored by the tap interface driver. The vhost kernel driver calls the kernel-mode DMA controller API to set the address vectors of the virtual machine's receive and transmit buffers, and the user-mode virtual switch and the vhost kernel thread trigger DMA data transfer sessions by setting the DMA controller's transmit and receive address vectors.
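One way to picture this hand-off is a per-device slot that the modified tap driver fills from the switch's ioctl and the modified vhost driver later reads before programming the kernel-mode DMA controller. The C sketch below is purely illustrative: `tap_ioctl_set_vec` and `vhost_get_vec` are hypothetical names, and a global slot stands in for real per-device kernel state:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

struct addr_vec { uint64_t addrs[16]; int n; };

/* Per-tap-device state shared between the tap and vhost drivers
 * (a single global here, for illustration only). */
static struct addr_vec tap_stored_vec;

/* Hypothetical ioctl handler in the modified tap driver: the
 * user-mode switch passes its transmit/receive address vector. */
int tap_ioctl_set_vec(const struct addr_vec *user_vec)
{
    memcpy(&tap_stored_vec, user_vec, sizeof(*user_vec));
    return 0;
}

/* The modified vhost driver reads the vector the tap driver stored,
 * before programming the kernel-mode DMA controller with it. */
const struct addr_vec *vhost_get_vec(void)
{
    return &tap_stored_vec;
}
```

The point of the indirection is that vhost never copies packet data; it only learns where the switch's buffers live.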
Optionally,
in the process of confirming whether a data packet buffer can be recycled, in the case of a packet sent from the user-mode virtual switch to the virtual machine, the user-mode virtual switch sets the sender address vector of the DMA controller through the user-mode DMA controller API, and the vhost kernel thread sets the virtual machine's receive address vector through the kernel-mode DMA controller API and triggers the DMA data transfer session. Once the transfer completes, the corresponding status flags are updated, and the user-mode virtual switch polls these flag bits to determine whether the packet buffer can be recycled.
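The recycle check can be sketched as a per-buffer flag that the DMA engine sets on completion and the switch polls. A toy C model; the names and single-flag layout are assumptions, not the patent's actual structures:

```c
#include <assert.h>
#include <stdint.h>

/* Each transmit buffer carries a status flag that the DMA controller
 * updates when the transfer to the virtual machine has completed. */
enum buf_state { BUF_IN_FLIGHT = 0, BUF_DONE = 1 };

struct tx_buf {
    uint8_t data[2048];
    volatile int state;   /* polled by the user-mode switch */
};

/* Poll the flag; returns 1 if the buffer may be recycled. */
int tx_buf_recyclable(const struct tx_buf *b)
{
    return b->state == BUF_DONE;
}

/* Stand-in for the DMA completion update done by hardware. */
void dma_mark_done(struct tx_buf *b)
{
    b->state = BUF_DONE;
}
```

Polling (rather than interrupts) fits the user-mode switch's run-to-completion loop described in the text.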
Optionally,
in the process of user-mode DMA setting the receive address vector, for packets transferred from the virtual machine to the user-mode switch, the vhost kernel thread sets the address vector for packets sent from the virtual machine through the API exposed by the kernel-mode DMA controller driver, while the user-mode virtual switch sets the receive address vector in advance through the API exposed by the user-mode DMA controller, so vhost can trigger the data transfer from the virtual machine to the user-mode virtual switch. Once the transfer completes, the vhost kernel driver sets the receive-completion flag, and by polling this flag bit the user-mode virtual switch knows that the received packets can be processed, so its remaining receive processing flow can continue.
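The receive path amounts to a small ring of completion flags: vhost marks slots complete after the DMA session, and the user-mode switch polls them in order. A hypothetical C model (ring size, names, and the in-order consumption policy are all assumptions):

```c
#include <assert.h>
#include <stddef.h>

#define RX_RING 8   /* illustrative ring size */

/* Receive ring shared between vhost (producer of completions) and the
 * user-mode switch (consumer). flags[i] == 1 means slot i holds a
 * packet DMA'd in from the virtual machine. */
struct rx_ring {
    int flags[RX_RING];
    int len[RX_RING];
    size_t head;          /* next slot the switch will poll */
};

/* vhost side: after the DMA session finishes, mark the slot complete. */
void rx_complete(struct rx_ring *r, size_t slot, int pkt_len)
{
    r->len[slot] = pkt_len;
    r->flags[slot] = 1;
}

/* Switch side: poll the next slot; returns the packet length,
 * or -1 if nothing has arrived yet. */
int rx_poll(struct rx_ring *r)
{
    if (!r->flags[r->head])
        return -1;
    int len = r->len[r->head];
    r->flags[r->head] = 0;                 /* hand the slot back */
    r->head = (r->head + 1) % RX_RING;
    return len;
}
```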
Optionally,
the user-mode virtual switch sets its large-page physical memory as the cache for exchanging data packets with the virtual machine through the ioctl API, and determines whether a cache entry has been fully sent or fully received by polling its flag bit.
Optionally,
the user-mode virtual switch must initialize a DMA controller on the node and, by calling the ioctl API, tell the tap interface driver and the vhost kernel driver how to transfer the virtual machine's packets by DMA into the buffers set by the user-mode virtual switch, and how to send the network packets emitted by the user-mode virtual switch to the virtual machine's network interface by DMA.
Optionally,
vhost is a kernel thread executed by the CPU of the host where the virtual machine resides. In this patent, the vhost kernel thread is only responsible for programming the DMA controller to tell it what data to transfer; the actual transfer work is done by the DMA controller. This saves precious host CPU resources, and the saved CPU capacity can be sold to more virtual machines.
Compared with the prior art, the invention has the following beneficial effects:
in the embodiment of the invention, the method for accelerating the virtual machine network is formed by the processes of modifying the tap interface kernel driver, setting the packet address vectors, creating the virtual machine, storing the packet address vectors in the vhost kernel driver, the DMA data transfer session, setting the DMA controller's sender address vector, confirming whether packet buffers can be recycled, and setting the receive address vector through user-mode DMA. At the same time, user-mode and kernel-mode virtual switch nodes can be mixed as required, and a virtual machine on any node can be seamlessly migrated to any other node, which solves the problems and challenges of deploying user-mode virtual switches.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of a method for accelerating virtual machine networking according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method for accelerating virtual machine networking in accordance with an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments, and all other embodiments obtained by those skilled in the art without making any inventive effort based on the embodiments of the present invention are within the scope of protection of the present invention.
Referring to fig. 1-2, the present invention provides a technical solution: a method of accelerating a virtual machine network, the method of accelerating a virtual machine network comprising:
modifying the tap interface kernel driver on the node;
setting address vectors for transmitted and received data packets;
creating a virtual machine;
storing the packet address vectors in the vhost kernel driver;
running a DMA data transfer session;
setting the sender address vector of the DMA controller;
confirming whether the data packet buffer can be recycled;
setting the receive address vector through user-mode DMA.
The user-mode virtual switch is required to use large-page physical memory as the packet transmit/receive buffer when opening the tap interface, which ensures that the user-mode virtual switch does not need to make additional copies of packet data.
The DMA controller must also have a user-mode driver, which the user-mode virtual switch uses to initialize the DMA controller and set the send and receive address vectors.
The virtual switch is a user-mode virtual switch, but a traditional tap interface is used instead of vhost-user when the virtual machine is created, which ensures that the created virtual machine can be migrated seamlessly anywhere between nodes running a kernel-mode virtual switch and nodes running a user-mode virtual switch.
The tap interface kernel driver of the node must be modified, and so must the vhost kernel driver. The user-mode virtual switch sets the packet address vectors through the ioctl interface exposed by the tap interface driver; the modified vhost kernel driver can then obtain the address vectors stored by the tap interface driver. The vhost kernel driver calls the kernel-mode DMA controller API to set the address vectors of the virtual machine's receive and transmit buffers, and the user-mode virtual switch and the vhost kernel thread trigger DMA data transfer sessions by setting the DMA controller's transmit and receive address vectors.
In the case of a packet sent from the user-mode virtual switch to the virtual machine, the user-mode virtual switch sets the sender address vector of the DMA controller through the user-mode DMA controller API, and the vhost kernel thread sets the virtual machine's receive address vector through the kernel-mode DMA controller API and triggers the DMA data transfer session. Once the transfer completes, the corresponding status flags are updated, and the user-mode virtual switch polls these flag bits to determine whether the packet buffer can be recycled.
For packets transferred from the virtual machine to the user-mode virtual switch, the vhost kernel thread sets the address vector for packets sent from the virtual machine through the API exposed by the kernel-mode DMA controller driver, while the user-mode virtual switch sets the receive address vector in advance through the API exposed by the user-mode DMA controller, so vhost can trigger the data transfer from the virtual machine to the user-mode virtual switch. Once the transfer completes, the vhost kernel driver sets the receive-completion flag, and by polling this flag bit the user-mode virtual switch knows that the received packets can be processed, so its remaining receive processing flow can continue.
Because the DMA controller can handle the transfer of multiple packets in one data transfer session, and to ensure that as many packets as possible are transferred per session, the user-mode virtual switch and the vhost kernel thread aggregate packets in batches before triggering a DMA data transfer session: they cooperatively configure the transmit and receive address vectors and then trigger one session. This greatly reduces the number of DMA data transfer sessions and increases the number of packets handled per session, further improving network performance.
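The batching policy can be illustrated by counting DMA sessions: one per full batch, plus one flush for any remainder. A toy C sketch, where `BATCH_MAX` and the counters are illustrative assumptions rather than values from the patent:

```c
#include <assert.h>
#include <stddef.h>

#define BATCH_MAX 32   /* hypothetical packets-per-session limit */

/* Aggregate packet descriptors and count how many DMA sessions are
 * triggered: one per full batch plus one for the final partial batch. */
struct batcher {
    size_t pending;    /* packets queued but not yet transferred */
    size_t sessions;   /* DMA sessions triggered so far */
};

/* Queue one packet; trigger a session when the batch fills up. */
void batch_add(struct batcher *b)
{
    if (++b->pending == BATCH_MAX) {
        b->sessions++;       /* would trigger the DMA session here */
        b->pending = 0;
    }
}

/* Flush any partial batch (e.g. when the poll loop goes idle). */
void batch_flush(struct batcher *b)
{
    if (b->pending) {
        b->sessions++;
        b->pending = 0;
    }
}
```

For example, 70 queued packets cost only 3 sessions instead of 70, which is the reduction in session count the paragraph above claims.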
To address the performance problem of exchanging network packets between the user-mode virtual switch and the virtual machine's tap port, the user-mode virtual switch sets its large-page physical memory as the cache for exchanging packets with the virtual machine through the ioctl API, and determines whether a cache entry has been fully sent or fully received by polling its flag bit. The user-mode virtual switch must initialize a DMA controller on the node and, by calling the ioctl API, tell the tap interface driver and the vhost kernel driver how to transfer the virtual machine's packets by DMA into the buffers set by the user-mode virtual switch, and how to send the packets emitted by the user-mode virtual switch to the virtual machine's network interface by DMA. Combining the DMA controller with the user-mode virtual switch's large-page physical memory, and modifying the tap interface driver and the vhost kernel driver, avoids at least two memory copies: the copy by vhost into the tap interface's receive queue, and the copy into the user-mode virtual switch's large-page buffer. At the same time, using DMA avoids the instruction cycles the CPU would waste on memory copies, reducing CPU usage so that vhost occupies very little CPU, and the freed CPU resources can be sold to other virtual machines.
Once deployed, the scheme not only allows virtual machine memory to be overcommitted but also allows the number of virtual machines to be overcommitted, because the CPU resources consumed by vhost kernel threads are greatly reduced, freeing CPU resources to run more virtual machines. At the same time, user-mode and kernel-mode virtual switch nodes can be mixed as required, and a virtual machine on any node can be seamlessly migrated to any other node, which solves the problems and challenges of deploying user-mode virtual switches.
The method for accelerating the virtual machine network is composed of the processes of modifying the tap interface kernel driver, setting the packet address vectors, creating the virtual machine, storing the packet address vectors in the vhost kernel driver, the DMA data transfer session, setting the DMA controller's sender address vector, confirming whether packet buffers can be recycled, and setting the receive address vector through user-mode DMA. It can overcommit virtual machine memory and allow seamless migration of virtual machines between user-mode and kernel-mode virtual switch nodes, thereby freeing CPU computing power and overselling more virtual machines and vCPUs. Because the virtual machine has no strong need to be pinned to a CPU, the CPU computing power it requires can be scheduled flexibly without binding to a fixed CPU.
The information interaction and execution processes between the units in the apparatus are based on the same conception as the method embodiments of the present invention; for details, refer to the description in the method embodiments, which is not repeated here.
The present invention also provides a storage apparatus storing instructions for causing a computer to perform the method of accelerating a virtual machine network described herein. Specifically, a system or apparatus may be provided with a storage medium on which software program code realizing the functions of any of the above embodiments is stored, and the computer (or CPU or MPU) of the system or apparatus reads out and executes the program code stored in the storage medium.
In this case, the program code itself read from the storage medium may realize the functions of any of the above-described embodiments, and thus the program code and the storage medium storing the program code form part of the present invention.
Examples of storage media for providing program code include floppy disks, hard disks, magneto-optical disks, optical disks (e.g., CD-ROMs, CD-R, CD-RWs, DVD-ROMs, DVD-RAMs, DVD-RWs, DVD+RWs), magnetic tapes, nonvolatile memory cards, and ROMs. Alternatively, the program code may be downloaded from a server computer by a communication network.
Further, it should be apparent that the functions of any of the above-described embodiments may be implemented not only by executing the program code read out by the computer, but also by causing an operating system or the like operating on the computer to perform part or all of the actual operations based on the instructions of the program code.
Further, it is understood that the program code read out from the storage medium may be written into a memory provided in an expansion board inserted into the computer or into a memory provided in an expansion unit connected to the computer, and a CPU or the like mounted on the expansion board or expansion unit then performs part or all of the actual operations based on the instructions of the program code, thereby realizing the functions of any of the above embodiments.
It should be noted that not all the steps and modules in the above flowcharts and the system configuration diagrams are necessary, and some steps or modules may be omitted according to actual needs. The execution sequence of the steps is not fixed and can be adjusted as required. The system structure described in the above embodiments may be a physical structure or a logical structure, that is, some modules may be implemented by the same physical entity, or some modules may be implemented by multiple physical entities, or may be implemented jointly by some components in multiple independent devices.
In the above embodiments, the hardware unit may be mechanically or electrically implemented. For example, a hardware unit may include permanently dedicated circuitry or logic (e.g., a dedicated processor, FPGA, or ASIC) to perform the corresponding operations. The hardware unit may also include programmable logic or circuitry (e.g., a general-purpose processor or other programmable processor) that may be temporarily configured by software to perform the corresponding operations. The particular implementation (mechanical, or dedicated permanent, or temporarily set) may be determined based on cost and time considerations.
While the invention has been illustrated and described in detail in the drawings and in the preferred embodiments, the invention is not limited to the disclosed embodiments, and it will be appreciated by those skilled in the art that the technical features of the various embodiments described above may be combined to produce further embodiments of the invention, which are also within the scope of the invention.

Claims (2)

1. A method of accelerating a virtual machine network, the method comprising:
modifying the tap interface kernel driver on the node;
setting address vectors for transmitted and received data packets;
creating a virtual machine;
storing the packet address vectors in the vhost kernel driver;
running a DMA (direct memory access) data transfer session;
setting the sender address vector of the DMA controller;
confirming whether the data packet buffer can be recycled;
setting the receive address vector through user-mode DMA;
in the process of setting the address vectors for transmitted and received data packets, the user-mode virtual switch needs to initialize the DMA controller using the interface exposed by the user-mode driver and to set the send and receive address vectors;
in the process of the vhost kernel driver storing the packet address vectors, the tap interface kernel driver of the node where the user-mode virtual switch is located must first be modified, and so must the vhost kernel driver; the user-mode virtual switch sets the packet address vectors through the ioctl interface exposed by the tap interface driver; the modified vhost kernel driver can then obtain the address vectors stored by the tap interface driver; the vhost kernel driver calls the kernel-mode DMA controller API to set the address vectors of the virtual machine's receive and transmit buffers; and the user-mode virtual switch and the vhost kernel thread trigger the DMA data transfer session by setting the DMA controller's transmit and receive address vectors;
in the process of confirming whether the data packet buffer can be recycled, in the case of a packet sent from the user-mode virtual switch to the virtual machine, the user-mode virtual switch sets the sender address vector of the DMA controller through the user-mode DMA controller API, and the vhost kernel thread sets the virtual machine's receive address vector through the kernel-mode DMA controller API and triggers the DMA data transfer session;
once the transfer is completed, the corresponding status flag is updated, and the user-mode virtual switch polls the flag bits to determine whether the data packet buffer can be recycled;
in the process of user-mode DMA setting the receive address vector, for packets transferred from the virtual machine to the user-mode switch, the vhost kernel thread sets the address vector for packets sent from the virtual machine through the API exposed by the kernel-mode DMA controller driver, and the user-mode virtual switch sets the receive address vector in advance through the API exposed by the user-mode DMA controller, so that vhost can trigger the data transfer from the virtual machine to the user-mode virtual switch; once the transfer is completed, the vhost kernel driver sets the receive-completion flag, and by polling the receive flag bit the user-mode virtual switch knows that the received packets can be processed, so that the remaining receive processing flow in the user-mode virtual switch can continue.
2. A method of accelerating a virtual machine network according to claim 1, wherein:
The user-mode virtual switch sets its large-page physical memory as the cache for exchanging data packets with the virtual machine through the ioctl API, and determines whether the cache has been fully sent or fully received by polling its flag bit.
CN202210221851.3A 2022-03-07 2022-03-07 Method for accelerating virtual machine network Active CN114584541B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210221851.3A CN114584541B (en) 2022-03-07 2022-03-07 Method for accelerating virtual machine network


Publications (2)

Publication Number Publication Date
CN114584541A CN114584541A (en) 2022-06-03
CN114584541B true CN114584541B (en) 2024-06-04

Family

ID=81774379

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210221851.3A Active CN114584541B (en) 2022-03-07 2022-03-07 Method for accelerating virtual machine network

Country Status (1)

Country Link
CN (1) CN114584541B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115576654B (en) * 2022-11-17 2023-03-10 苏州浪潮智能科技有限公司 Request processing method, device, equipment and storage medium
CN115858103B (en) * 2023-02-27 2023-06-09 珠海星云智联科技有限公司 Method, device and medium for virtual machine hot migration of open stack architecture

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101465863A (en) * 2009-01-14 2009-06-24 北京航空航天大学 Method for implementing high-efficiency network I/O in kernel virtual machine circumstance
CN102497434A (en) * 2011-12-16 2012-06-13 中国科学院计算技术研究所 Establishing method of kernel state virtual network equipment and packet transmitting and receiving methods thereof
CN103428226A (en) * 2013-08-30 2013-12-04 天津汉柏汉安信息技术有限公司 Method and system for communication of user state and inner core
CN109901909A (en) * 2019-01-04 2019-06-18 中国科学院计算技术研究所 Method and virtualization system for virtualization system
CN114020406A (en) * 2021-10-28 2022-02-08 郑州云海信息技术有限公司 Method, device and system for accelerating I/O of virtual machine by cloud platform

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10635474B2 (en) * 2016-05-09 2020-04-28 Marvell Asia Pte, Ltd. Systems and methods for virtio based optimization of data packet paths between a virtual machine and a network device for live virtual machine migration


Also Published As

Publication number Publication date
CN114584541A (en) 2022-06-03

Similar Documents

Publication Publication Date Title
CN114584541B (en) Method for accelerating virtual machine network
EP3719657A1 (en) Communication with accelerator via rdma-based network adapter
CN107515775B (en) Data transmission method and device
US20070041383A1 (en) Third party node initiated remote direct memory access
US11431681B2 (en) Application aware TCP performance tuning on hardware accelerated TCP proxy services
CN103942178A (en) Communication method between real-time operating system and non-real-time operating system on multi-core processor
CA3169613C (en) Proxy service through hardware acceleration using an io device
WO2012135234A2 (en) Facilitating, at least in part, by circuitry, accessing of at least one controller command interface
CN111211999A (en) OVS-based real-time virtual network implementation method
CN113067849B (en) Network communication optimization method and device based on Glusterfs
US20200358721A1 (en) Buffer allocation for parallel processing of data
JPH11327815A (en) Communication control method/device and communication system
WO2024040846A1 (en) Data processing method and apparatus, electronic device, and storage medium
CN113810397A (en) Protocol data processing method and device
US10178041B2 (en) Technologies for aggregation-based message synchronization
CN116455836A (en) Intelligent network card, cloud server and traffic forwarding method
CN110519242A (en) Data transmission method and device
MacArthur et al. An efficient method for stream semantics over rdma
CN115269326A (en) Task processing method, device, medium and equipment based on chip monitoring system
WO2018106392A1 (en) Technologies for multi-core wireless network data transmission
Bie et al. Vhost-User
US7139832B2 (en) Data transfer and intermission between parent and child process
CN109165099B (en) Electronic equipment, memory copying method and device
WO2024103891A1 (en) Data processing method and apparatus
Kang et al. Design and implementation of kernel S/W for TCP/IP offload engine (TOE)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant