CN116662223A - Data transmission method, system and electronic equipment - Google Patents

Data transmission method, system and electronic equipment

Info

Publication number
CN116662223A
CN116662223A (application CN202310652971.3A)
Authority
CN
China
Prior art keywords
data transmission
host
virtual machine
address
queue
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310652971.3A
Other languages
Chinese (zh)
Inventor
陈义全
王一静
靳珍
付晓燕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202310652971.3A
Publication of CN116662223A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00: Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02: Addressing or allocation; Relocation
    • G06F 12/08: Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/10: Address translation
    • G06F 12/1027: Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Computer And Data Communications (AREA)

Abstract

The application discloses a data transmission method, a data transmission system, and an electronic device. The method comprises: obtaining a storage queue of a virtual machine into which a data transmission request has been written, wherein the data transmission request is used to request a storage device to transmit data to be transmitted; converting the virtual address of the storage queue in the virtual machine into a physical address; mapping the converted physical address to a memory address in a host; and writing the data transmission request into a target queue associated with the memory address, wherein the data transmission request is read from the target queue by the storage device, and the storage device transmits the data to be transmitted in response to the request. The application solves the technical problem of low data transmission efficiency.

Description

Data transmission method, system and electronic equipment
Technical Field
The present application relates to the field of computers, and in particular, to a data transmission method, system and electronic device.
Background
At present, data can be stored using storage virtualization, which is currently implemented either through software emulation or through hardware acceleration. Software emulation, however, consumes a large amount of central processing unit (CPU) resources, while hardware acceleration requires additional hardware and is time-consuming in computation and storage, so the technical problem of low data transmission efficiency remains.
In view of the above problems, no effective solution has been proposed at present.
Disclosure of Invention
The embodiments of the application provide a data transmission method, a data transmission system, and an electronic device, so as to at least solve the technical problem of low data transmission efficiency.
According to one aspect of the embodiments of the present application, a data transmission method is provided. The method may include: obtaining a storage queue of a virtual machine into which a data transmission request has been written, wherein the data transmission request is used to request transmission of data to be transmitted to a storage device; converting the virtual address of the storage queue in the virtual machine into a physical address; mapping the converted physical address to a memory address in a host; and writing the data transmission request into a target queue associated with the memory address, wherein the data transmission request is read from the target queue by the storage device, and the storage device transmits the data to be transmitted in response to the request.
According to another aspect of the embodiments of the application, another data transmission method is provided. The method may include: writing a data transmission request of the virtual machine into a storage queue, wherein the data transmission request is used to request that the data to be transmitted be stored in a storage device; determining the virtual address of the storage queue in the virtual machine; and transmitting the virtual address to the host, where the host converts the virtual address into a physical address, maps the converted physical address to a memory address in the host, and writes the data transmission request into a target queue associated with the memory address, wherein the data transmission request is read from the target queue by the storage device, and the storage device stores the data to be transmitted in response to the request.
According to another aspect of the embodiments of the application, a data transmission system is also provided. The system may include: a virtual machine, configured to write a data transmission request into a storage queue, wherein the data transmission request is used to request transmission of data to be transmitted to a storage device; a host, configured to convert the virtual address of the storage queue in the virtual machine into a physical address, map the converted physical address to a memory address in the host, and write the data transmission request into a target queue associated with the memory address; and the storage device, configured to read the data transmission request from the target queue and transmit the data to be transmitted in response to the request.
According to another aspect of an embodiment of the present application, there is also provided an electronic device, which may include a memory and a processor: the memory is configured to store computer executable instructions, and the processor is configured to execute the computer executable instructions, wherein the computer executable instructions, when executed by the processor, implement the method for transmitting data according to any one of the above.
According to another aspect of the embodiments of the present application, a processor is also provided, configured to run a program, where the program, when running, performs any one of the above data transmission methods.
According to another aspect of the embodiments of the present application, a computer-readable storage medium is also provided, including a stored program, where the program, when run, controls the device in which the storage medium is located to perform any one of the above data transmission methods.
In the embodiments of the application, a storage queue of a virtual machine into which a data transmission request has been written is obtained, the virtual address of the storage queue in the virtual machine is converted into a physical address, the converted physical address is mapped to a memory address in a host, and the data transmission request is written into a target queue associated with the memory address, from which the storage device reads it and transmits the data to be transmitted. That is, when a virtual machine needs to transmit data, the host can detect the storage queue into which the virtual machine has written the data transmission request corresponding to the data to be transmitted, convert the virtual address of that storage queue into a physical address, map the physical address to a memory address in the host, and write the request into the target queue associated with that memory address; the request can then be read from the target queue and the corresponding data transmitted accordingly. Acceleration is thus achieved without an additional central processing unit or extra hardware, improving data transmission efficiency and solving the technical problem of low data transmission efficiency.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application, as claimed.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
fig. 1 is a block diagram of a hardware configuration of a computer terminal (or mobile device) for implementing a data transmission method according to an embodiment of the present application;
FIG. 2 is a block diagram of a computing environment for a method of data transmission in accordance with an embodiment of the present application;
FIG. 3 is a flow chart of a method of transmitting data according to an embodiment of the application;
FIG. 4 is a flow chart of another method of transmitting data according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a data transmission system according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a data transmission system according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a memory queue pass-through principle of a nonvolatile memory high-speed protocol according to an embodiment of the present application;
FIG. 8 is a schematic diagram illustrating the allocation of logical block addresses of a nonvolatile memory high speed protocol device according to an embodiment of the present application;
fig. 9 is a schematic diagram of a data transmission device according to an embodiment of the present application;
FIG. 10 is a schematic diagram of another data transmission device according to an embodiment of the present application;
fig. 11 is a block diagram of a computer terminal according to an embodiment of the present application;
fig. 12 is a block diagram of an electronic device of a data transmission method according to an embodiment of the present application.
Detailed Description
In order that those skilled in the art will better understand the present application, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on these embodiments without inventive effort shall fall within the scope of protection of the present application.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, some of the terms appearing in the description of the embodiments of the application are explained below:
An input/output memory management unit (IOMMU) is a management unit for access to host memory by storage read/write devices; it can control the permissions and address mappings of such direct memory access (DMA) to host memory;
Direct memory access (DMA) means that data copying between a device and host memory is initiated by the device itself and can proceed without passing through the host CPU;
A logical block address (LBA) is a general addressing scheme for blocks of data on a computer storage device; it can refer either to the address of a data block or to the data block that a given address points to;
A completion queue (CQ) is a circular buffer of fixed size used to post the status of completed commands;
Virtual Function I/O (VFIO) is a userspace driver framework for storage read/write virtualization that safely exposes device capabilities such as register access, interrupts, and DMA to user space.
Example 1
According to an embodiment of the present application, there is provided a data transmission method, it being noted that the steps shown in the flowcharts of the drawings may be performed in a computer system such as a set of computer executable instructions, and that although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from that herein.
The method embodiment provided in Embodiment 1 of the present application may be executed on a mobile terminal, a computer terminal, or a similar computing device. Fig. 1 is a block diagram of the hardware structure of a computer terminal (or mobile device) for implementing a data transmission method according to an embodiment of the present application. As shown in fig. 1, the computer terminal 10 (or mobile device) may include one or more processors 102 (shown in the figure as 102a, 102b, ..., 102n; the processor 102 may include, but is not limited to, a microcontroller unit (MCU), a field-programmable gate array (FPGA), or the like), a memory 104 for storing data, and a transmission module 106 for communication functions. It may further include: a display, an input/output (I/O) interface, a universal serial bus (USB) port (which may be included as one of the ports of the bus), a network interface, a power supply, and/or a camera. Those of ordinary skill in the art will appreciate that the configuration shown in fig. 1 is merely illustrative and does not limit the configuration of the electronic device described above. For example, the computer terminal 10 may include more or fewer components than shown in fig. 1, or have a different configuration from that shown in fig. 1.
The hardware block diagram shown in fig. 1 may serve not only as an exemplary block diagram of the computer terminal 10 (or mobile device) described above, but also as an exemplary block diagram of the server described above. In an alternative embodiment, fig. 2 shows, in block diagram form, an embodiment in which the computer terminal 10 (or mobile device) shown in fig. 1 is used as a computing node in a computing environment 201. Fig. 2 is a block diagram of a computing environment for a data transmission method according to an embodiment of the present application. As shown in fig. 2, the computing environment 201 includes a plurality of computing nodes (e.g., servers) running on a distributed network (shown as 210-1, 210-2, ...). The computing nodes each contain local processing and memory resources, and an end user 202 may run applications or store data remotely in the computing environment 201. An application may be provided as a plurality of services 220-1, 220-2, 220-3 and 220-4 in the computing environment 201, representing services "A", "D", "E", and "H", respectively.
The end user 202 may provide and access services through a web browser or other software application on a client. In some embodiments, the provisioning and/or requests of the end user 202 may be provided to an ingress gateway 230. The ingress gateway 230 may include a corresponding agent to handle provisioning and/or requests for services (one or more services provided in the computing environment 201).
Services are provided or deployed according to various virtualization techniques supported by the computing environment 201. In some embodiments, services may be provided according to virtual machine (VM) based virtualization, container-based virtualization, and/or the like. Virtual machine-based virtualization emulates a real computer by initializing a virtual machine, executing programs and applications without directly touching any real hardware resources. Whereas a virtual machine virtualizes an entire machine, container-based virtualization starts containers that virtualize at the operating system (OS) level, so that multiple workloads can run on a single operating system instance.
In one embodiment based on container virtualization, several containers of a service may be assembled into one Pod (e.g., a Kubernetes Pod). For example, as shown in fig. 2, the service 220-2 may be equipped with one or more Pods 240-1, 240-2, ..., 240-N (collectively referred to as Pods). A Pod may include an agent 245 and one or more containers 242-1, 242-2, ..., 242-M (collectively referred to as containers). One or more containers in the Pod handle requests related to one or more corresponding functions of the service, while the agent 245 generally controls network functions related to the service, such as routing and load balancing. Other services may likewise be equipped with Pods similar to those shown.
In operation, executing a user request from end user 202 may require invoking one or more services in computing environment 201, and executing one or more functions of one service may require invoking one or more functions of another service. As shown in FIG. 2, service "A"220-1 receives a user request of end user 202 from ingress gateway 230, service "A"220-1 may invoke service "D"220-2, and service "D"220-2 may request service "E"220-3 to perform one or more functions.
The computing environment may be a cloud computing environment, in which the allocation of resources is managed by a cloud service provider, allowing functions to be developed without considering the implementation, adjustment, or scaling of servers. The computing environment allows developers to execute code that responds to events without building or maintaining a complex infrastructure. Instead of scaling a single hardware device to handle the potential load, the service may be partitioned into a set of functions that can be scaled automatically and independently.
In the above-described operating environment, the present application provides a data transmission method as shown in fig. 3. It should be noted that, the data transmission method of this embodiment may be performed by the host in the embodiment shown in fig. 1. Fig. 3 is a flowchart of a data transmission method according to an embodiment of the present application, and as shown in fig. 3, the method may include the steps of:
In step S302, a storage queue of the virtual machine into which a data transmission request has been written is obtained, where the data transmission request is used to request transmission of the data to be transmitted to a storage device.
In the technical solution provided in step S302 of the present application, if data needs to be transmitted, a data transmission request may be submitted to a storage queue in the virtual machine, and that storage queue may then be obtained. The data transmission request is used to request the storage device to transmit the corresponding data to be transmitted; it may involve reading or writing data and may be an input/output (I/O) read-write request, also called an I/O request. The virtual machine may include a virtual machine Non-Volatile Memory Express (NVMe) driver, which may also be referred to as a custom virtual machine NVMe driver or a virtual NVMe device. The virtual machine NVMe driver may include a controller memory buffer (CMB) of the virtual NVMe device; the CMB is general-purpose memory in the NVMe device, also called a read-write memory buffer, and may be used to store the storage queue corresponding to the data transmission request. The storage queue may be a circular buffer of fixed size used to submit data transfer requests to the NVMe controller, and may also be referred to as a submission queue (SQ), an I/O submission queue (I/O SQ), or a command request queue.
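The submission-queue structure described above can be sketched as a fixed-size ring buffer. The following is an illustrative model only; the class name, queue depth, and request format are assumptions and not part of the application:

```python
class SubmissionQueue:
    """Toy model of an NVMe submission queue: a fixed-size circular buffer.

    Illustrative only -- names, depth, and request format are assumptions,
    not taken from the patent text.
    """

    def __init__(self, depth=16):
        self.depth = depth
        self.slots = [None] * depth
        self.tail = 0   # next slot the driver writes
        self.head = 0   # next slot the controller reads

    def submit(self, request):
        """Driver side: place a request at the tail and advance it."""
        if (self.tail + 1) % self.depth == self.head:
            raise RuntimeError("queue full")
        self.slots[self.tail] = request
        self.tail = (self.tail + 1) % self.depth

    def consume(self):
        """Controller side: take the request at the head, if any."""
        if self.head == self.tail:
            return None  # queue empty
        request = self.slots[self.head]
        self.head = (self.head + 1) % self.depth
        return request


sq = SubmissionQueue(depth=4)
sq.submit({"opcode": "write", "lba": 0x10})
assert sq.consume() == {"opcode": "write", "lba": 0x10}
```

In a real NVMe device the head and tail indices live in device registers (the doorbells), and the driver rings the doorbell after advancing the tail; the model above only mirrors the ring-buffer behavior.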
Optionally, in this embodiment, a storage queue (I/O queue) may be created on the host by the host's NVMe driver, and the storage queue on the host may be mapped into the virtual machine through the controller memory buffer in the virtual machine. This achieves direct access to the I/O queue, so that the virtual machine can access the host's target queue directly. Optionally, the corresponding storage queue in the virtual machine is created directly on the storage queue of the read-write memory buffer provided by the host NVMe driver, achieving a pass-through effect between the NVMe storage queues.
Optionally, if there is data to be transmitted, this embodiment may use the virtual machine monitor (hypervisor) on the host to monitor, in real time, the data transmission requests that the NVMe driver in the virtual machine submits to the storage queue in the controller memory buffer, thereby obtaining the storage queue into which the data transmission request has been written.
Step S304, the virtual address of the storage queue in the virtual machine is converted into a physical address.
In the technical solution provided in step S304 of the present application, after the storage queue into which the data transmission request has been written is obtained, the virtual address of the storage queue in the virtual machine may be converted into the corresponding physical address.
Optionally, after the NVMe driver in the virtual machine submits the data transmission request to the submission queue in the memory buffer, the virtual address of the storage queue into which the request was written may be transmitted to the host. By driving the input/output memory management unit (IOMMU) in the host, the guest virtual address (GVA) corresponding to the storage queue, which may be a virtual logical block address (virtual LBA), can be converted by the hardware memory management unit into the corresponding guest physical address (GPA). The guest physical address may be a physical logical block address, also called a physical LBA.
For example, this embodiment may convert a virtual address into the corresponding guest physical address by adding an offset to the virtual address of the storage queue. It should be noted that this manner of converting the virtual address into the physical address is merely illustrative; the method and process of the conversion are not specifically limited here.
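The offset-based translation given as an example above can be expressed as follows. This is a minimal sketch; the offset value and function name are hypothetical, and in practice the translation is carried out by the MMU/IOMMU page tables:

```python
# Hypothetical guest-virtual to guest-physical translation by a fixed offset,
# as in the example above. The offset value is an assumption for illustration.

QUEUE_REGION_OFFSET = 0x4000_0000  # assumed base offset of the queue region

def gva_to_gpa(gva: int, offset: int = QUEUE_REGION_OFFSET) -> int:
    """Convert a guest virtual address (GVA) of the storage queue into a
    guest physical address (GPA) by adding a constant offset."""
    return gva + offset

# A queue page at GVA 0x1000 would map to GPA 0x4000_1000 under this offset.
assert gva_to_gpa(0x1000) == 0x4000_1000
```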
In step S306, the converted physical address is mapped to the memory address in the host.
In the technical solution provided in step S306 of the present application, after the virtual address has been converted into the corresponding guest physical address, the converted physical address may be mapped to the corresponding memory address in the host. The host may include a host NVMe driver, which may contain a page fault handler and a storage queue in the host; the storage queue may also be referred to as an NVMe queue or I/O queue. The page fault handler (PFH) handles the case in which no memory address has yet been mapped for a given physical address. The memory address in the host may be the host physical address (HPA).
Optionally, after the virtual address is converted into the guest physical address, the guest physical address may be taken over from the hardware memory management unit via extended page tables (EPT), and the host memory address corresponding to the guest physical address of the storage queue may be looked up in the extended page table to determine whether a corresponding memory address exists. If no corresponding memory address is found, the page fault handler in the host handles the situation. If a corresponding memory address is found, the guest physical address is converted into the corresponding host memory address according to the page table entry found in the extended page table.
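The extended-page-table lookup and page-fault path described above can be sketched as follows. This is an illustrative model under assumed names; the table contents, page size, and allocator are simplified stand-ins for the hardware EPT and the host driver's page fault handler:

```python
# Sketch of the EPT lookup described above: the guest physical address (GPA)
# of the storage queue is looked up in a page table; if no entry exists, a
# page-fault handler establishes the mapping first. All values are assumed.

PAGE_SIZE = 4096

ept = {}                        # GPA page frame -> HPA page frame
next_free_hpa = [0x8000_0000]   # assumed start of free host memory

def page_fault_handler(gpa_frame):
    """Allocate a host page and record the mapping, mimicking the PFH in
    the host NVMe driver when no mapping exists yet."""
    hpa_frame = next_free_hpa[0]
    next_free_hpa[0] += PAGE_SIZE
    ept[gpa_frame] = hpa_frame
    return hpa_frame

def gpa_to_hpa(gpa):
    """Translate a guest physical address to a host physical address via
    the (toy) extended page table, faulting in a mapping on a miss."""
    frame, offset = gpa & ~(PAGE_SIZE - 1), gpa & (PAGE_SIZE - 1)
    hpa_frame = ept.get(frame)
    if hpa_frame is None:           # EPT miss: invoke the page fault handler
        hpa_frame = page_fault_handler(frame)
    return hpa_frame + offset

first = gpa_to_hpa(0x1234)   # faults once, then the mapping is established
again = gpa_to_hpa(0x1234)   # hits the existing page table entry
assert first == again
```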
Optionally, this embodiment provides a complete NVMe device to the virtual machine based on the mapping relationship between host and virtual machine: control resources are emulated in software, the host's control resources are shared with the NVMe device in the virtual machine, and these are combined with the data resources allocated by the NVMe driver in the host. When the virtual machine accesses the control resources of the virtual NVMe device, the access can be mapped to the corresponding host resources through this mapping relationship. For example, the control resources can be emulated in the host's hypervisor by the traditional trap-and-emulate method: an access by the virtual machine to the control resources of the virtual NVMe device causes a virtual machine exit (VM Exit) event and traps to the host.
In the embodiments of the application, data transmission requests of the virtual machine can be monitored by the virtual machine monitor (hypervisor). When a data transmission request is detected, the virtual address of the storage queue into which the request was written can be converted into a physical address, and the converted physical address mapped into the host to obtain the corresponding host memory address. Through this pass-through technique between the storage queues of the virtual machine's NVMe device and the host, the storage queues holding data transmission requests in the virtual machine can be mapped onto the host, so that no additional central processing unit or hardware is needed for acceleration, reducing the consumption of computing resources and improving data transmission efficiency.
Optionally, before data storage, the page table entries corresponding to the storage queue and the doorbell register may be built in advance in the extended page table. Once the new extended page table is built, the virtual machine can access the storage queue of the NVMe device just as the host would. The location of the real user data in the host can also be determined in advance, for example via a physical region page (PRP) or a scatter/gather list (SGL), collectively PRP/SGL. That is, after the virtual machine issues an I/O request, the real user data can be transferred via an NVMe DMA operation, achieving queue pass-through from the storage queue to the target queue.
In step S308, the data transmission request is written into the target queue associated with the memory address, where the data transmission request is read from the target queue by the storage device, and the storage device is configured to respond to the data transmission request and transmit the data to be transmitted.
In the technical solution provided in step S308 of the present application, after the guest physical address has been mapped to the memory address in the host, the data transmission request may be written into the target queue associated with that memory address. By reading the written request from the target queue, the storage device can transmit the data to be transmitted specified in the request. After the transmission completes, the NVMe device may write a command or data into the queue; that is, once the NVMe device has completed the current I/O request, the state of the current target queue may be updated via the completion queue (CQ). The target queue is the storage queue in the host corresponding to the storage queue in the virtual machine, and may also be referred to as an I/O SQ or I/O queue. The storage device, which may be the NVMe device in the host, responds to the data transmission request by transmitting the corresponding data to be transmitted.
Optionally, after mapping the physical address to a memory address in the host, a data transfer request may be submitted to a target queue in the host. After the data transmission request is submitted to the target queue in the host, the NVMe device in the host can be controlled to acquire the data transmission request, and the transmission process of the data to be transmitted corresponding to the data transmission request is executed.
Alternatively, the NVMe device may write a command or data into the target queue after completing the corresponding data transmission based on the data transmission request; that is, once the NVMe device has finished the current I/O request, the state of the current target queue may be updated to the completion queue state.
Through steps S302 to S308 in the embodiment of the present application, when the virtual machine needs to transmit data, the write of a data transmission request corresponding to the data to be transmitted into the storage queue can be detected; the virtual address of the storage queue on the virtual machine is converted into a physical address, the physical address is then mapped to a memory address in the host, and the data transmission request is written into the target queue associated with that memory address. The data transmission request can then be read from the target queue and the corresponding data transmitted accordingly. Acceleration is thus achieved without an additional central processing unit or extra hardware, improving the efficiency of data transmission and solving the technical problem of low data transmission efficiency.
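The four steps can be sketched end to end in a few lines. This is a simplified model under stated assumptions: the MMU is reduced to a fixed offset, the extended page table to a one-entry dictionary, and the queues to Python lists; none of these details come from the patent itself:

```python
# Hypothetical constants: a flat MMU that translates a guest virtual
# address by adding a fixed offset, and a one-entry extended page table.
MMU_OFFSET = 0x1000
EPT = {0x5000: 0x9000}          # guest-physical -> host memory address

target_queue = []               # host-side target (I/O) queue

def handle_request(request, gva):
    gpa = gva + MMU_OFFSET                 # S304: virtual -> physical (MMU)
    hpa = EPT[gpa]                         # S306: physical -> host memory (EPT)
    target_queue.append((hpa, request))    # S308: write into target queue
    return hpa

def device_poll():
    """The storage device reads the request from the target queue
    and transmits the corresponding data."""
    hpa, request = target_queue.pop(0)
    return f"transmitted {request['data']} at {hex(hpa)}"

hpa = handle_request({"op": "write", "data": "blk0"}, gva=0x4000)
result = device_poll()
print(result)   # -> transmitted blk0 at 0x9000
```

No CPU-side copy of the payload appears anywhere in the flow; the host only forwards the request descriptor, which is the point of the pass-through design.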
The above-described method of this embodiment is further described below.
As an optional implementation manner, step S306 of mapping the converted physical address to a memory address in the host includes: determining, in a mapping relationship model, the memory address that has a target mapping relationship with the converted physical address, where the mapping relationship model is used to characterize the mapping relationship between different physical addresses and their corresponding memory addresses.
In this embodiment, the memory address in the host that has a target mapping relationship with the converted physical address may be determined in a mapping relationship model. The mapping relationship model, which may be used to characterize the mapping relationship between different physical addresses and their corresponding memory addresses, may be an extended page table. The extended page table may include a plurality of page tables, each consisting of page table entries, where each entry may store one memory address or physical address.
Optionally, the mapping relationships between different physical addresses and memory addresses may be established in the mapping relationship model in advance. After these relationships are established, if it is detected that a data transmission request has been written into a storage queue in the virtual machine, the virtual address of that storage queue may be converted into the corresponding physical address of the virtual machine, and the mapping relationship model may be queried for a memory address that has a mapping relationship with that physical address. If such a memory address exists, the physical address may be mapped onto the corresponding memory address in the host according to the result of the query.
Optionally, in this embodiment, after the virtual address in the storage queue of the virtual machine is converted into the physical address of the virtual machine by the memory management unit, the extended page table may take over the physical address from the memory management unit and search all of its page tables for a memory address that has a mapping relationship with the taken-over physical address. If a corresponding memory address is found, the physical address may be translated according to the page table entry of the page table in which that memory address resides, yielding the corresponding memory address in the host.
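The search across the page tables of the extended page table can be illustrated as a two-level lookup. The split into 512-entry tables is an assumption for illustration only (it mirrors common x86 page-table geometry, which the patent does not specify):

```python
# Two-level lookup sketch: the extended page table is split into page
# tables of PTE_PER_TABLE entries; a guest-physical page number is
# split into a table index and an entry index.
PTE_PER_TABLE = 512   # hypothetical table size

class TwoLevelEPT:
    def __init__(self):
        self.tables = {}            # table index -> list of page table entries

    def install(self, gpa_page, hpa_page):
        table = self.tables.setdefault(gpa_page // PTE_PER_TABLE,
                                       [None] * PTE_PER_TABLE)
        table[gpa_page % PTE_PER_TABLE] = hpa_page

    def lookup(self, gpa_page):
        table = self.tables.get(gpa_page // PTE_PER_TABLE)
        if table is None or table[gpa_page % PTE_PER_TABLE] is None:
            return None             # no mapping: caller must handle the fault
        return table[gpa_page % PTE_PER_TABLE]

ept = TwoLevelEPT()
ept.install(gpa_page=1000, hpa_page=7777)
print(ept.lookup(1000), ept.lookup(1001))   # -> 7777 None
```

The `None` result corresponds to the "memory address not found" case handled by the page fault path described next in the text.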
As an optional implementation manner, in response to the mapping relationship model not including a memory address corresponding to the converted physical address, a target program is called to establish a target mapping relationship between the physical address and a memory address, and the target mapping relationship is added to the mapping relationship model.
In this embodiment, the mapping relationship model may be queried, based on the physical address, for a memory address that has a mapping relationship with it, thereby determining whether the model includes a memory address corresponding to that physical address. If the mapping relationship model does not include a memory address corresponding to the converted physical address, a target program may be called to establish a target mapping relationship between the physical address and a memory address, and the established relationship is added to the mapping relationship model for storage. The target program may be a page fault handler, used to handle the case where the model contains no memory address for the physical address, and may be stored in the NVMe driver in the host.
Optionally, if no memory address has a mapping relationship with a given physical address, the physical address has not been mapped to any memory address in the host, and a page fault interrupt may be generated. When this occurs, the page fault handler stored in the host NVMe driver can obtain the physical address that lacks a corresponding memory address, establish the mapping from the physical address to a memory address (GPA -> HPA) in the extended page table, and return once the mapping between the two addresses is complete. The mapping relationship between the physical address and its corresponding memory address is thus established and added in the case where the extended page table did not include it, avoiding the problem of failing to find the corresponding memory address in the extended page table during subsequent data transmission.
In the embodiment of the present application, the case where no memory address corresponding to a physical address is found in the extended page table can be handled by the page fault handler. The handler is called once for each physical address that lacks a corresponding memory address in the extended page table: it extracts the faulting physical address, then establishes and stores the mapping relationship between that physical address and its corresponding memory address in the extended page table. Once the page fault handler has established the extended page table entry for the faulting physical address, data transmission requests on the storage queue at that physical address no longer trigger page fault interrupts, and read/write performance is not affected.
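The fault-once-then-hit behavior can be sketched as follows. The toy allocator stands in for the host NVMe driver's page fault handler; the addresses and counters are hypothetical:

```python
# Sketch of the page-fault path: when the extended page table has no
# entry for a guest-physical page, the handler allocates a host page,
# installs the GPA -> HPA entry, and the access is retried.
ept = {}
next_free_hpa = [0x100000]   # toy host page allocator
fault_count = [0]

def page_fault_handler(gpa_page):
    fault_count[0] += 1
    hpa_page = next_free_hpa[0]
    next_free_hpa[0] += 1
    ept[gpa_page] = hpa_page        # install GPA -> HPA mapping

def translate(gpa_page):
    if gpa_page not in ept:         # page fault: entry missing
        page_fault_handler(gpa_page)
    return ept[gpa_page]            # retried access now succeeds

translate(42)           # first access faults and installs the mapping
translate(42)           # second access hits the table directly
print(fault_count[0])   # -> 1 (the handler runs once per unmapped page)
```

This matches the text's claim that the handler is invoked at most once per physical address, so steady-state I/O never pays the fault cost.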
As an optional implementation manner, step S306 of mapping the converted physical address to a memory address in the host includes: mapping the converted physical address to a memory address in the host by using emulated resources of the virtual storage device, where the emulated resources are used to emulate the processing resources of the storage device.
In this embodiment, the converted physical address may be mapped to a memory address in the host using the emulated resources of the virtual storage device. The virtual storage device may be a virtual NVMe device, that is, an NVMe virtual device in the virtual machine monitor in the host; it may comprise multiple virtual devices, each corresponding to one virtual machine in the host. The emulated resources, which may be used to emulate the processing resources of the storage device, may include control resources, data resources, and the like. The control resources may include resources such as the Peripheral Component Interconnect Express (PCIe) configuration space, the Base Address Register (BAR), and the management queue. The data resources may include I/O queues, doorbell registers, interrupt resources, LBAs, and the like. The base address register may also be referred to as the BAR space. It should be noted that the content of the data resources and control resources listed here is merely illustrative and not limiting.
In the embodiment of the present application, the device resources of the NVMe device can be divided into control resources and data resources. The control resources can be emulated in software and combined with the data resources allocated by the host NVMe driver. Control resources are critical to an NVMe device, yet a physical NVMe device has only one set of them, while the NVMe device seen by each virtual machine needs its own independent control resources. The control resources in the host can therefore be virtualized so that they can be shared among virtual machines, forming the emulated resources provided to the virtual storage device. Through these emulated resources the virtual machine can map a physical address to a memory address in the host, so that a complete NVMe device is presented to the virtual machine by the hypervisor of the host. Since no additional central processing unit or hardware acceleration is needed, the technical problem of low data transmission efficiency is solved.
Alternatively, the control resources of the host's NVMe device may be emulated by a trap-and-emulate method. When the virtual machine accesses the control resources of the virtual NVMe device, a VM Exit event is triggered and control traps into the host, where the data transmission request in the virtual machine is taken over by the virtual machine monitor. The hypervisor reads or updates the related virtual registers, maps the physical address of the storage queue corresponding to the data transmission request to a memory address in the host, and transmits the data to be transmitted. After the transmission is completed, the hypervisor can inject an interrupt into the virtual machine to notify it that the request is complete. Normal I/O reads and writes do not involve the control resources, so the trap-and-emulate method does not affect I/O performance.
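The split between trapped control accesses and untrapped data-path accesses can be sketched as follows. The register names (CAP, CC, CSTS) follow NVMe convention, but the enable/ready semantics shown here are a simplified assumption, not the patent's specification:

```python
# Toy trap-and-emulate sketch: writes to control resources cause a
# "VM Exit" handled by the hypervisor, which updates a purely
# software-backed virtual register; data-path writes do not trap.
CONTROL_REGS = {"CAP", "CC", "CSTS"}       # emulated control resources
virtual_regs = {"CAP": 0x2000, "CC": 0, "CSTS": 0}
vm_exits = [0]

def guest_write(reg, value):
    if reg in CONTROL_REGS:
        vm_exits[0] += 1                   # trap: VM Exit into the host
        virtual_regs[reg] = value          # hypervisor emulates the update
        if reg == "CC" and value & 1:      # controller enable bit set
            virtual_regs["CSTS"] = 1       # emulated "ready" status
    else:
        pass                               # data path: no trap, full speed

guest_write("CC", 1)                       # control access -> one VM Exit
guest_write("SQ_DOORBELL", 5)              # I/O path -> no VM Exit
print(vm_exits[0], virtual_regs["CSTS"])   # -> 1 1
```

Only the rare control-plane writes pay the exit cost, which is why the text can claim that trap-and-emulate leaves I/O performance untouched.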
As an alternative embodiment, the method further comprises: determining, in the data transmission request, the storage information of the emulated resources of the virtual storage device in the host; and determining, in the virtual storage device, the emulated resources stored based on the storage information.
In this embodiment, the storage information corresponding to the emulated resources of the virtual storage device in the host may be determined from the data transmission request, and the emulated resources stored based on that storage information may be determined in the virtual storage device. The storage information may also be referred to as LBA information, and may include the start address of the logical block address range (lba_start) and the size of the range (lba_size).
Alternatively, since the internal space of the NVMe device is addressed by logical block addresses (LBAs), it may be isolated by allocating different ranges of logical block addresses to different virtual machines. The NVMe driver in the host allocates a different start address and range of logical block addresses to each virtual device, and the allocated LBA information can be stored in the controller memory buffer of the virtual device.
Alternatively, when the virtual machine sends a data transmission request through an instruction (e.g., nvme_sub_cmd), the NVMe driver in the virtual machine may add the start address of the logical block address range (lba_start) to the slba field of the data transmission request, and may check whether the resulting logical block address is within the range accessible to the virtual machine.
In the embodiment of the present application, a complete NVMe device can be provided to the virtual machine through the control resources and data resources, so that data storage can be achieved by this NVMe device virtualization method, ensuring the technical effect of improving data transmission efficiency through NVMe device virtualization.
As an alternative embodiment, writing the data transmission request into the target queue associated with the memory address includes: writing the data transmission request into the target queue associated with the memory address in response to the storage space in the storage information being within the range of storage space accessible to the virtual machine.
In this embodiment, it may be determined whether the storage space in the storage information is within the range of storage space accessible to the virtual machine; if so, the data transmission request may be written into the target queue associated with the memory address. The storage space may be a logical block address.
Alternatively, it may be checked whether the logical block address is within the accessible range of the virtual machine. If so, the data access request is legal, can be sent, and reads and writes proceed normally. If it is not within the accessible range, a read-write error is raised for the data access request and the request cannot be sent.
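The rebase-and-check step can be sketched as below. The window sizes are hypothetical, and the check is performed on the VM-relative LBA before the per-VM lba_start offset is applied:

```python
# Hypothetical LBA isolation: each virtual machine is assigned a start
# LBA and a window size; the driver rejects any request outside the
# window and rebases a legal slba by the VM's lba_start.
LBA_START = 10_000     # lba_start assigned to this VM
LBA_SIZE = 2_048       # lba_size: number of blocks the VM may access

def rebase_and_check(slba, nblocks):
    if slba < 0 or slba + nblocks > LBA_SIZE:
        # outside the accessible range: read-write error, do not send
        raise PermissionError("read-write error: LBA out of range")
    return slba + LBA_START    # legal: submit to the target queue

print(rebase_and_check(0, 8))    # -> 10000
try:
    rebase_and_check(2_040, 16)  # crosses the end of the window
except PermissionError as exc:
    print(exc)
```

Because every request is offset and bounded per virtual machine, one VM can never read or write blocks belonging to another.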
As an alternative embodiment, the method may further comprise: a target queue is created and mapped to a virtual machine.
In this embodiment, a target queue may be created and mapped into the virtual machine.
Optionally, the I/O queue may be created through a host NVMe driver, and mapped into the virtual machine through a control memory buffer in the virtual machine, so that the purpose of the virtual machine directly accessing the I/O queue in the host may be achieved.
In the embodiment of the present application, direct access to host memory by the virtual machine can be achieved through direct memory access remapping (DMA Remapping) in the host, without processing by the host's central processing unit, and interrupt handling for the virtual machine can be achieved through an interrupt posting (Interrupt Post) component in the host without hypervisor involvement, thereby achieving the technical effect of improving data transmission efficiency.
As an alternative embodiment, the method may further comprise: in response to a virtual machine boot up, direct memory access remapping information is configured in the host, wherein the direct memory access remapping information is used to cause the storage device to request from the host, without the processor of the host, a transfer of data from the virtual machine.
In this embodiment, the direct memory access remapping information may be configured in the host after the virtual machine is booted, where this information enables the storage device to transfer data directly to or from the virtual machine without passing through the host's processor. The processor may be a central processing unit.
Optionally, after the virtual machine is started by the monitor of the virtual machine, DMA remapping and interrupt forwarding may be configured in the host NVMe driver, and efficient DMA address translation and interrupt handling may also be provided to the virtual machine.
Alternatively, in addition to providing standard NVMe driver functions, the host NVMe driver may be responsible for enabling features such as direct access to the I/O queue by the virtual machine, DMA remapping, and interrupt posting.
In the embodiment of the present application, the I/O queue can be created by the host NVMe driver, and the buffer of the I/O queue can be mapped into the virtual machine through the CMB of the virtual device, so that the virtual machine can directly access the I/O queue. Functions such as direct access to host memory without passing through the host CPU and interrupt delivery to the virtual machine can be achieved through DMA, direct memory access remapping, interrupt posting, and the like, realizing the technical effect of improving data transmission efficiency without hypervisor involvement.
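The per-VM boot-time setup described above can be summarized in a short sketch. Every structure and address here is hypothetical; the point is only the three artifacts the host driver prepares for each virtual machine:

```python
# Sketch of boot-time setup by the host NVMe driver for one VM:
# create the I/O queue, map it into the guest via the controller
# memory buffer (CMB), and record the IOMMU configuration (DMA
# remapping plus interrupt posting) so the device reaches guest
# memory and delivers interrupts without the host CPU or hypervisor.
def setup_vm(vm_id):
    io_queue = {"id": vm_id, "entries": []}     # created by the host driver
    cmb_mapping = {"guest_addr": 0xC0000000,    # hypothetical CMB address
                   "queue": io_queue}
    iommu = {"dma_remap": f"GPA->HPA table for vm{vm_id}",
             "interrupt_post": f"posted-interrupt descriptor for vm{vm_id}"}
    return {"queue": io_queue, "cmb": cmb_mapping, "iommu": iommu}

cfg = setup_vm(1)
print(sorted(cfg))   # -> ['cmb', 'iommu', 'queue']
```

After this setup, the runtime I/O path touches only the queue and the IOMMU hardware, never the hypervisor.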
The embodiment of the application also provides another data transmission method shown in fig. 4. It should be noted that, the data transmission method of this embodiment may be performed by the virtual machine of the embodiment. Fig. 4 is a flowchart of another data transmission method according to an embodiment of the present application, and as shown in fig. 4, the method may include the steps of:
step S402, writing a data transmission request of the virtual machine into a storage queue, wherein the data transmission request is used for requesting to transmit data to be transmitted to a storage device.
In the technical solution provided in the above step S402 of the present application, a data transmission request may be written into a storage queue in a virtual machine, where the data transmission request may be used to request a storage device to transmit corresponding data to be transmitted.
Step S404, determining the virtual address of the storage queue in the virtual machine.
In the technical solution provided in the above step S404 of the present application, the virtual address of the storage queue for writing the data transmission request in the virtual machine may be determined.
In step S406, the virtual address is transferred to the host, where the virtual address is converted into a physical address by the host, the physical address obtained by conversion is mapped to a memory address in the host, and a data transfer request is written into a target queue associated with the memory address, where the data transfer request is read from the target queue by a storage device, and the storage device is configured to transfer data to be transferred in response to the data transfer request.
In the technical solution provided in the above step S406 of the present application, the virtual address may be transmitted to the host, where it is converted by the host into the physical address in the corresponding virtual machine; the converted physical address is mapped to a memory address in the host, and the data transmission request is written into the target queue associated with that memory address. The data transmission request is read from the target queue by the storage device, which is configured to respond to the request and transmit the corresponding data to be transmitted.
Optionally, after the virtual address of the storage queue in the virtual machine is determined, it may be transmitted to the host, where the memory management unit converts it into the corresponding physical address. The physical address may then be mapped to the corresponding memory address in the host based on the mapped memory address queried from the extended page table, and the data transmission request is written into the target queue associated with that memory address, so that the corresponding data to be transmitted is transmitted based on the request.
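The guest-side half of the flow (steps S402 to S406) can be sketched as follows. The queue layout, base address, and 64-byte entry stride are illustrative assumptions only:

```python
# VM-side sketch: the guest driver writes the request into its storage
# queue (S402), determines the entry's virtual address (S404), and
# returns that address for transmission to the host (S406).
class GuestDriver:
    QUEUE_BASE_GVA = 0x7000   # hypothetical virtual address of the queue
    ENTRY_SIZE = 64           # hypothetical size of one queue entry

    def __init__(self):
        self.storage_queue = []

    def submit(self, request):
        self.storage_queue.append(request)               # S402: write request
        slot = len(self.storage_queue) - 1
        gva = self.QUEUE_BASE_GVA + slot * self.ENTRY_SIZE  # S404: entry GVA
        return gva                                       # S406: sent to host

drv = GuestDriver()
print(hex(drv.submit({"op": "read", "lba": 0})))    # -> 0x7000
print(hex(drv.submit({"op": "read", "lba": 8})))    # -> 0x7040
```

The host then performs the address translation and target-queue write described in step S406.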
The method may further comprise: a store queue is created based on the target queue.
Alternatively, a corresponding store queue may be created from the target queue.
In the embodiment of the present application, the target queue on the host can be mapped into the virtual machine through the virtual machine driver, so that the corresponding storage queue in the virtual machine can be created based on the target queue of the readable and writable storage buffer provided by the host NVMe driver. This achieves a direct connection between the target queue and the storage queue, that is, I/O queue pass-through, with the technical effect that the host's target queue can be accessed directly from the virtual machine.
Through the above steps S402 to S406 of the present application, a data transmission request of the virtual machine is written into a storage queue, where the request is used to ask for data to be transmitted to a storage device; the virtual address of the storage queue in the virtual machine is determined; and the virtual address is transmitted to the host, where it is converted into a physical address, the converted physical address is mapped to a memory address in the host, and the data transmission request is written into the target queue associated with that memory address. The data transmission request is read from the target queue by the storage device, which responds by transmitting the data to be transmitted. Acceleration is thus achieved without an additional central processing unit or extra hardware, improving data transmission efficiency and solving the technical problem of low data transmission efficiency.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present application is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present application.
Example 2
According to an embodiment of the present application, there is further provided an embodiment of a data transmission system, and fig. 5 is a schematic diagram of a data transmission system according to an embodiment of the present application, where, as shown in fig. 5, the data transmission system may include: virtual machine 501, host 502, and storage device 503.
The virtual machine 501 is configured to write a data transmission request to the storage queue, where the data transmission request is used to request transmission of data to be transmitted to the storage device.
Alternatively, the virtual machine may include: a virtual drive that may be used to perform at least one of: adding the storage information of the simulation resources of the virtual storage equipment in the host to a data transmission request; initializing a virtual storage device; a store queue is created based on the target queue.
Optionally, when the data transmission request is written into the storage queue, the virtual machine driver of the virtual machine 501 can add the storage information corresponding to the emulated resources of the virtual storage device in the host into the data transmission request. The virtual machine driver may also initialize the virtual storage device, so that an NVMe device corresponding to the host can be provided to the virtual machine.
Optionally, the target queue on the host may be mapped into the virtual machine by the virtual machine driver, so that the corresponding storage queue in the virtual machine may be created based on the target queue of the readable and writable storage buffer provided by the host NVMe driver. This achieves a direct connection between the target queue and the storage queue, that is, I/O queue pass-through, so that the host's target queue can be accessed directly from the virtual machine.
Alternatively, when there is a data transfer request write in the storage queue in the virtual machine 501, the virtual address of the storage queue in the virtual machine may be determined and sent to the host.
The host 502 is configured to convert a virtual address of a storage queue in a virtual machine into a physical address, map the physical address obtained by conversion to a memory address in the host, and write a data transmission request into a target queue associated with the memory address.
Alternatively, the host 502 may include a virtual storage device, created by the monitor of the virtual machine and configured to provide emulated resources for mapping the physical address to the memory address in the host, where the emulated resources are used to emulate the processing resources of the storage device.
Optionally, after initialization of the storage device in the host is completed, the virtual storage device in the monitor can be created or destroyed at any time through the monitor of the virtual machine: it may be created temporarily according to actual requirements, or created or destroyed immediately after the storage device is initialized. The monitor of the virtual machine may be the virtual machine monitor (hypervisor) in the host. It should be noted that the scenarios for creating and destroying the virtual storage device through the monitor described above are merely illustrative and not limiting.
Optionally, the host 502 may further include: the memory manager may be configured to translate the virtual address into a physical address, where the memory manager may be a memory management unit.
Optionally, after the host receives the virtual address sent by the virtual machine, the received virtual address may be converted into a corresponding physical address by the memory manager.
Optionally, after the virtual address is converted into the physical address, a memory address having a mapping relationship with the received physical address may be determined by the virtual storage device, which provides emulated resources to map the physical address into the memory address in the host, where the emulated resources may be used to emulate the processing resources of the storage device.
The storage device 503 is configured to transmit data to be transmitted in response to a read data transmission request in the target queue.
In this embodiment, a data storage system is provided. A data transmission request is written into the storage queue by the virtual machine, where the request is used to ask for data to be transmitted to the storage device; the host converts the virtual address of the storage queue in the virtual machine into a physical address, maps the converted physical address to a memory address in the host, and writes the data transmission request into the target queue associated with that memory address; the storage device reads the data transmission request from the target queue and transmits the data to be transmitted in response. Acceleration is thus achieved without an additional central processing unit or extra hardware, improving data transmission efficiency and solving the technical problem of low data transmission efficiency.
It should be noted that, in the present application, the user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.), for example, the data for verification are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data need to comply with the related laws and regulations and standards of the related country and region, and are provided with corresponding operation entries for the user to select authorization or rejection.
Example 3
Currently, NVMe is a standard interface protocol specified for solid state drives (Solid State Drive, abbreviated as SSD) based on the high-speed serial computer expansion bus standard, and has become mainstream in cloud computing data centers. However, effective sharing of storage resources through NVMe virtualization is one of the essential key technologies.
In one embodiment, data may be transferred using storage virtualization, which is currently implemented by software emulation or hardware acceleration. However, the software emulation method consumes a large amount of central processing unit resources, while the hardware acceleration method requires additional hardware. Both have the drawback of high time consumption in the computation and storage process, so the technical problem of low data transmission efficiency remains.
To address this, the present application provides an NVMe device virtualization method that solves the technical problem of low data transmission efficiency. Unlike traditional solutions, which must emulate the NVMe device through a layer of software after virtual address translation, this method avoids processing through the central processing unit or extra hardware, thereby solving the technical problem of low data transmission efficiency.
In the embodiment of the present application, when the virtual machine needs to transmit data, the write of a data transmission request corresponding to the data to be transmitted into the storage queue can be detected; the virtual address of the storage queue on the virtual machine is converted into a physical address, the physical address is then mapped to a memory address in the host, and the data transmission request is written into the target queue associated with that memory address. The data transmission request can then be read from the target queue and the corresponding data transmitted accordingly. Acceleration is thus achieved without an additional central processing unit or extra hardware, improving data transmission efficiency and solving the technical problem of low data transmission efficiency.
The above-described method of this embodiment is further described below.
In this embodiment, fig. 6 is a schematic diagram of a data transmission system according to an embodiment of the present application. As shown in fig. 6, the data transmission system may include a plurality of virtual machines, for example, a virtual machine 601 and a virtual machine 603, a host 602, and hardware 604. The virtual machine 601 may include an application 6011 and a virtual machine NVMe driver 6012; similarly, the virtual machine 603 may include an application 6031 and a virtual machine NVMe driver 6032. Both virtual machine NVMe drivers may include an NVMe queue, which may comprise an Admin queue and multiple I/O queues. The host may include an NVMe virtual device 6021, an IOMMU driver 6022, and a host NVMe driver 6023, where the NVMe virtual device 6021 may include a plurality of virtual devices, each virtual machine corresponding to one virtual device, and each virtual device including a PCIe configuration space, a BAR space, and an Admin queue. The host NVMe driver 6023 may include DMA remapping, interrupt forwarding, and an NVMe queue, which may include an Admin queue and a plurality of I/O queues. The hardware 604 may include an IOMMU 6041 and an NVMe device 6042.
In this embodiment, an I/O queue may be created on the host through the host NVMe driver, and the storage queue on the host may be mapped into the virtual machine through a controller memory buffer (CMB), so as to achieve direct communication over the I/O queue and the technical effect of directly accessing the storage queue from the virtual machine. The corresponding storage queue in the virtual machine can be created directly on the readable and writable storage buffer provided by the host NVMe driver, thereby realizing pass-through between the NVMe storage queues. If there is data to be transmitted, the NVMe driver in the virtual machine submits the corresponding data transmission request to the storage queue in the controller memory buffer, and the monitor (hypervisor) of the virtual machine monitors the queue in real time.
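The queue pass-through described above can be illustrated with a minimal sketch: the host creates the I/O queue buffer, and the virtual machine writes submission entries into that same buffer directly, without copying. The class and method names below are hypothetical stand-ins, and a shared Python list stands in for the mapped controller memory buffer.

```python
# Hedged sketch of CMB-based queue pass-through: the host allocates the
# I/O queue buffer, the VM "maps" it and writes requests into it directly.
# Names (Host, VirtualMachine, submit) are illustrative, not the patent's.

class Host:
    def create_io_queue(self):
        # queue buffer allocated in host memory
        self.io_queue = []
        # the handle returned here plays the role of the CMB mapping
        return self.io_queue

class VirtualMachine:
    def __init__(self, mapped_queue):
        # same underlying buffer as the host's queue -- no copy is made
        self.queue = mapped_queue

    def submit(self, request):
        # the VM-side NVMe driver writes the submission entry directly
        self.queue.append(request)
```

Because the virtual machine appends to the very buffer the host created, a request submitted by the guest is immediately visible to the host-side driver, which is the effect the controller memory buffer mapping is intended to achieve.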
In this embodiment, after the NVMe driver in the virtual machine submits the data transmission request to the submission queue in the controller memory buffer, the virtual address of the storage queue into which the data transmission request was written may be transmitted to the host, and the memory management unit driver in the host may convert the virtual machine virtual address corresponding to the storage queue into the physical address of the corresponding virtual machine by means of the memory management unit in hardware.
For example, the virtual address may be translated into the physical address in the corresponding virtual machine by adding an offset to the virtual address of the storage queue. It should be noted that this manner of converting the virtual address into the physical address is merely illustrative; the method and process of the conversion are not particularly limited.
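As a toy illustration of the offset scheme mentioned above (the offset value and function name are assumptions made for the sketch, not values from the patent):

```python
# Hypothetical offset-based translation of a storage-queue virtual
# address to a guest physical address; 0x4000 is an assumed offset.

QUEUE_BASE_OFFSET = 0x4000

def virt_to_phys(virtual_addr: int, offset: int = QUEUE_BASE_OFFSET) -> int:
    """Translate a storage-queue virtual address by adding a fixed offset."""
    return virtual_addr + offset
```

For example, a queue entry at guest virtual address 0x1000 would translate to physical address 0x5000 under this assumed offset.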
Optionally, fig. 7 is a schematic diagram of a pass-through principle of a storage queue of a nonvolatile memory high-speed protocol according to an embodiment of the present application, as shown in fig. 7, the pass-through principle may take one virtual machine as an example, and may include a virtual machine 701, an extended page table 702 and a host 703, where the virtual machine 701 may include a virtual machine NVMe driver 7011, and the driver 7011 may include I/O queues corresponding to different data transmission requests, for example, an I/O queue 1 and an I/O queue 2. The host 703 may include a host NVMe driver 7031, where the driver 7031 may include a page fault handler and an NVMe queue, where the NVMe queue may include queues such as Admin queue, I/O queue 0, I/O queue 1, and I/O queue n.
Alternatively, taking the case where the data transmission request is written into I/O queue 2 of the virtual machine as an example, the mapping from the memory address corresponding to I/O queue 2 to the physical address may be looked up in the extended page table 702. If the mapping between the memory address of the queue and the physical address exists in the extended page table, the physical address of I/O queue 2 may be directly translated into the corresponding memory address. If the mapping does not exist in the extended page table, the situation can be handled by the page fault handler in the host NVMe driver: the physical address that caused the page fault is sent from the extended page table to the page fault handler in the host, the handler establishes the mapping from the physical address to the memory address and sends it to the extended page table for address translation, that is, the mapping relation between the memory address and the physical address is recorded and stored in the extended page table.
In this embodiment, after converting the virtual address into the physical address, the physical address of the virtual machine may be taken over from the memory management unit in hardware through the extended page table, and the memory address of the host corresponding to the physical address of the virtual machine in the storage queue may be looked up in the extended page table, so as to determine whether the corresponding memory address can be found. If the corresponding memory address cannot be found, the situation can be handled by the page fault handler in the host. If the corresponding memory address can be found, the physical address of the virtual machine can be converted into the memory address in the corresponding host according to the page table entry of the found memory address in the extended page table.
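The lookup described in the preceding paragraphs can be sketched as follows. The sketch models the extended page table as a page-granular dictionary from guest physical pages to host memory pages, raising a fault when no entry exists; all names are illustrative, not the patent's.

```python
# Minimal model of the extended-page-table translation path: a hit
# returns the host memory address, a miss raises a page fault that the
# host-side fault handler would then service. Names are hypothetical.

PAGE_SIZE = 4096

class PageFault(Exception):
    """No mapping from the guest physical page to a host memory page."""

class ExtendedPageTable:
    def __init__(self):
        self.entries = {}  # guest physical page -> host memory page

    def map(self, gpa_page: int, hpa_page: int) -> None:
        self.entries[gpa_page] = hpa_page

    def translate(self, gpa: int) -> int:
        page, offset = divmod(gpa, PAGE_SIZE)
        if page not in self.entries:
            raise PageFault(hex(gpa))
        return self.entries[page] * PAGE_SIZE + offset
```

On a hit the page number is rewritten and the in-page offset preserved; on a miss the raised fault corresponds to the page-fault interrupt handled by the host NVMe driver.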
Optionally, before data transmission, page table entries corresponding to the storage queue and the doorbell register may be pre-established in the extended page table. After these page table entries are established, the storage queue of the NVMe device can be accessed on the virtual machine as if it were accessed on the host. Alternatively, the real user data locations in the host, for example, the PRP/SGL entries, may be determined in advance, that is, after the virtual machine issues an I/O request, the locations of the real user data can be transferred by the NVMe device, which may also be referred to as a DMA operation, so that queue pass-through from the storage queue to the target queue can be achieved.
Optionally, after the memory management unit converts the virtual address in the storage queue of the virtual machine into the physical address of the virtual machine, the physical address may be taken over from the memory management unit through the extended page table, and the memory address in the host having a mapping relation with the taken-over physical address may be looked up in all page tables of the extended page table. If the corresponding memory address in the host can be found, the physical address can be converted according to the page table entry of the page table where that memory address is located, so as to obtain the corresponding memory address in the host.
Optionally, if no memory address has a mapping relation with a certain physical address, it indicates that the physical address is not mapped to a memory address in any host, so a page fault interrupt may be generated. When this occurs, the physical address for which no corresponding memory address exists can be obtained through the page fault handler stored in the NVMe driver in the host, the GPA->HPA mapping can be built in the extended page table, and after this mapping is completed, the reverse HPA->GPA mapping can be built as well. Therefore, when the extended page table does not contain the memory address corresponding to a physical address, the mapping relation between the physical address and the corresponding memory address is built and added, which avoids the problem of the physical address failing to be resolved to a corresponding memory address in the extended page table during subsequent data transmission.
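The fault path above — build the GPA->HPA mapping and then the reverse HPA->GPA mapping — can be sketched as below. The page-frame allocator and all names are assumptions made for the illustration.

```python
# Hedged sketch of the page-fault handler: on a fault it picks a free
# host page frame, records GPA->HPA, and then records the reverse
# HPA->GPA mapping. The allocator is a simple counter for illustration.

class PageFaultHandler:
    def __init__(self):
        self.gpa_to_hpa = {}       # forward mapping kept for the EPT
        self.hpa_to_gpa = {}       # reverse mapping built afterwards
        self._next_hpa = 0x100     # assumed next free host page frame

    def handle_fault(self, gpa_page: int) -> int:
        hpa_page = self._next_hpa
        self._next_hpa += 1
        self.gpa_to_hpa[gpa_page] = hpa_page   # GPA -> HPA
        self.hpa_to_gpa[hpa_page] = gpa_page   # HPA -> GPA
        return hpa_page
```

Keeping both directions lets subsequent translations in either direction succeed without triggering another fault for the same page.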
Alternatively, control resources in the hypervisor of the host may be handled by a trap-and-emulate method: when the virtual machine accesses the control resources of the virtual NVMe device, a VM Exit event is triggered and trapped into the host, after which the data transfer request in the virtual machine is taken over by the monitor of the virtual machine. The hypervisor reads or updates the related virtual register, maps the physical address of the storage queue corresponding to the data transmission request to a memory address in the host, and transmits the data to be transmitted corresponding to the data transmission request; after the transmission is completed, an interrupt can be generated to the virtual machine through the hypervisor to notify it that the request is complete. Normal I/O reads and writes do not involve control resources, so I/O performance is not affected by the trap-and-emulate method.
Alternatively, the direct memory access remapping information may be configured in the host after the virtual machine is booted, wherein the direct memory access remapping information may be used to enable the storage device to directly request transfer of data from the virtual machine to the host without passing through the processor of the host. The processor may also be referred to as a central processor.
Optionally, after the virtual machine is started by the monitor of the virtual machine, DMA remapping and interrupt forwarding may be configured in the host NVMe driver, and efficient DMA address translation and interrupt handling may also be provided to the virtual machine.
Alternatively, in addition to providing standard NVMe driver functions, the host NVMe driver may be responsible for enabling functions such as direct access to the I/O queues by the virtual machine, DMA remapping, and interrupt posting.
Optionally, the I/O queue may be created through the host NVMe driver, and the I/O queue buffer may be mapped onto the virtual machine through the CMB of the virtual device, so that the virtual machine can directly access the I/O queue. Through DMA, direct memory access remapping, interrupt posting, and the like, the virtual machine can directly access the memory of the host and perform interrupt handling without passing through the host CPU; because no participation of a hypervisor or the like is required, the technical effect of improving the efficiency of data transmission can be achieved.
In the embodiment of the application, the device resources of the NVMe device can be divided into control resources and data resources, and a complete NVMe device can be provided to the virtual machine in the host hypervisor. The control resources can be emulated by software and combined with the data resources allocated by the NVMe driver of the host. Because the control resources are critical to the NVMe device, the NVMe device has only one set of control resources, but the NVMe device on each virtual machine needs independent control resources; the control resources in the host can therefore be virtualized and shared with the virtual machines, that is, formed into the emulation resources provided to the virtual storage devices. In this way, the virtual machine can map the physical address to the memory address in the host through the emulation resources, and since no additional central processing unit or hardware acceleration is needed, the technical problem of low data transmission efficiency is solved.
In this embodiment, according to the Logical Block Address (LBA) addressing of the internal space of the NVMe device, different virtual machines may be allocated different ranges of logical block addresses to isolate the storage space inside the NVMe device. Start addresses and ranges of different logical block addresses are allocated to different virtual devices through the NVMe driver in the host, and the allocated LBA information can be stored in the controller memory buffer of the virtual machine device.
For example, fig. 8 is a schematic diagram illustrating allocation of logical block addresses of a nonvolatile memory high-speed protocol device according to an embodiment of the present application. As shown in fig. 8, different ranges of logical block addresses (LBAs) of the NVMe device in the host may be allocated according to the different virtual machines. For example, taking three virtual machines (virtual machine 0, virtual machine 1, and virtual machine 2) as an example, the NVMe device may be divided into three ranges (lba_size0, lba_size1, and lba_size2), where the start address of each range corresponds to lba_start0, lba_start1, and lba_start2. lba_start0 corresponds to virtual machine 0, lba_start1 corresponds to virtual machine 1, and lba_start2 corresponds to virtual machine 2.
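The allocation in fig. 8 can be sketched as packing per-VM (lba_start, lba_size) ranges back to back; the block counts below are example values for the sketch, not values from the patent.

```python
# Illustrative back-to-back partitioning of an NVMe device's LBA space
# among virtual machines, mirroring the lba_start/lba_size layout of fig. 8.

def allocate_lba_ranges(total_blocks: int, vm_sizes: list) -> list:
    """Return one (lba_start, lba_size) pair per virtual machine."""
    assert sum(vm_sizes) <= total_blocks, "device too small for all VMs"
    ranges, start = [], 0
    for size in vm_sizes:
        ranges.append((start, size))
        start += size
    return ranges
```

With three equal-sized virtual machines, `allocate_lba_ranges(3000, [1000, 1000, 1000])` yields disjoint ranges starting at blocks 0, 1000, and 2000, which is the isolation property the paragraph describes.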
Alternatively, when the virtual machine sends a data transfer request through an instruction (e.g., nvme_sub_cmd), the start address of the allocated logical block address range may be added to the slba field in the data transmission request by the NVMe driver in the virtual machine, and it may be checked whether the resulting logical block address is within the range accessible to the virtual machine.
Alternatively, if the data access request is within the accessible range, the data access request is correct and can be sent and read or written normally. If the data access request is not within the accessible range, a read-write error is generated and the data access request cannot be sent.
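A sketch of the slba rebasing and range check described in the two paragraphs above (the function name and the choice of exception are assumptions made for the illustration):

```python
# Hypothetical check performed by the VM-side NVMe driver: rebase the
# request's VM-relative slba by the VM's allocated start address and
# reject accesses that fall outside the VM's lba_size range.

def rebase_and_check(slba: int, nblocks: int, lba_start: int, lba_size: int) -> int:
    """Return the absolute starting LBA, or raise on an out-of-range access."""
    if slba + nblocks > lba_size:
        raise PermissionError("access outside the VM's accessible LBA range")
    return lba_start + slba
```

A request that passes the check is forwarded with the rebased absolute address; one that fails is rejected before it ever reaches the device, which is what isolates the per-VM storage regions.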
In this embodiment, after mapping the physical address to the memory address in the host, the data transmission request may be submitted to the target queue in the host, and after the submitting is successful, the NVMe device in the host may be controlled to obtain the data transmission request, and the transmission process of the data to be transmitted corresponding to the data transmission request is executed.
Alternatively, after the NVMe device writes a command or data into the target queue, that is, after completing the corresponding data transmission based on the data transmission request, the NVMe device has finished the current I/O request, and the state of the current target queue may be updated to the completion queue state.
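The submission-to-completion flow above can be modeled minimally as follows; the queue layout and status strings are illustrative, not the NVMe specification's encoding.

```python
# Toy model of a target queue pair: the guest submits a request, the
# device consumes it and posts a completion entry marking the request done.

from collections import deque

class TargetQueue:
    def __init__(self):
        self.submission = deque()   # entries written by the virtual machine
        self.completion = deque()   # entries written back by the NVMe device

    def submit(self, request: dict) -> None:
        self.submission.append(request)

    def device_process(self) -> None:
        # the device reads one request and updates the completion state
        req = self.submission.popleft()
        self.completion.append({"id": req["id"], "status": "complete"})
```

After `device_process` runs, the request has moved from the submission side to the completion side, mirroring the queue-state update described above.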
In the embodiment of the application, when the virtual machine needs to transmit data, the storage queue into which the virtual machine has written the data transmission request corresponding to the data to be transmitted can be detected, the virtual address of the storage queue on the virtual machine is converted into a physical address, the physical address is then mapped to a memory address in the host, and the data transmission request is written into the target queue associated with the memory address. The data transmission request for the data to be transmitted can then be read from the target queue and the corresponding data transmitted according to the request, thereby achieving acceleration without adopting an additional central processing unit or hardware, realizing the technical effect of improving the efficiency of data transmission, and solving the technical problem of low efficiency of data transmission.
Example 4
According to an embodiment of the present application, there is also provided a data transmission apparatus for implementing the data transmission method shown in fig. 3.
Fig. 9 is a schematic diagram of a data transmission device according to an embodiment of the present application, and as shown in fig. 9, the data transmission device 900 may include: a monitoring unit 902, a conversion unit 904, a mapping unit 906 and a first writing unit 908.
The monitoring unit 902 is configured to obtain a storage queue to which a data transmission request is written to the virtual machine, where the data transmission request is used to request transmission of data to be transmitted to the storage device.
A conversion unit 904, configured to convert a virtual address of the storage queue in the virtual machine into a physical address.
The mapping unit 906 is configured to map the converted physical address to a memory address in the host.
A first writing unit 908, configured to write a data transmission request into a target queue associated with a memory address, where the data transmission request is read from the target queue by a storage device, and the storage device is configured to respond to the data transmission request and transmit data to be transmitted.
Here, the above-described monitoring unit 902, conversion unit 904, mapping unit 906, and first writing unit 908 correspond to steps S302 to S308 in embodiment 1; the four units are the same as the corresponding steps in the examples and application scenarios implemented, but are not limited to those disclosed in embodiment 1 above. It should be noted that the above-mentioned units may be hardware components or software components stored in a memory (for example, the memory 104) and processed by one or more processors (for example, the processors 102a, 102b, …, 102n), or the above-mentioned units may be part of an apparatus and may run in the computer terminal 10 provided in embodiment 1.
According to an embodiment of the present application, there is also provided a data transmission apparatus for implementing the data transmission method shown in fig. 4.
Fig. 10 is a schematic diagram of another data transmission device according to an embodiment of the present application, and as shown in fig. 10, the data transmission device 1000 may include: a second writing unit 1002, a determining unit 1004, and a transmitting unit 1006.
A second writing unit 1002, configured to write a data transmission request of the virtual machine to the storage queue, where the data transmission request is used to request transmission of data to be transmitted to the storage device.
A determining unit 1004, configured to determine a virtual address of the storage queue in the virtual machine.
The transmitting unit 1006 is configured to transmit a virtual address to the host, where the virtual address is converted into a physical address by the host, the physical address obtained by conversion is mapped to a memory address in the host, and a data transmission request is written into a target queue associated with the memory address, where the data transmission request is read from the target queue by a storage device, and the storage device is configured to transmit data to be transmitted in response to the data transmission request.
Here, it should be noted that the above-described second writing unit 1002, determining unit 1004, and transmitting unit 1006 correspond to steps S402 to S406 in embodiment 1; the three units are the same as the corresponding steps in the examples and application scenarios implemented, but are not limited to those disclosed in embodiment 1 above. It should be noted that the above-mentioned units may be hardware components or software components stored in a memory (for example, the memory 104) and processed by one or more processors (for example, the processors 102a, 102b, …, 102n), or the above-mentioned units may be part of an apparatus and may run in the computer terminal 10 provided in embodiment 1.
In the data transmission device, when the virtual machine needs to transmit data, the storage queue into which the virtual machine has written the data transmission request corresponding to the data to be transmitted can be detected, the virtual address of the storage queue on the virtual machine is converted into a physical address, the physical address is then mapped to a memory address in the host, and the data transmission request is written into the target queue associated with the memory address. The data transmission request for the data to be transmitted can then be read from the target queue and the corresponding data transmitted according to the request, thereby achieving acceleration without adopting an additional central processing unit or hardware, realizing the technical effect of improving the data transmission efficiency, and solving the technical problem of low data transmission efficiency.
Example 5
Embodiments of the present application may provide a computer terminal, which may be any one of a group of computer terminals. Alternatively, in the present embodiment, the above-described computer terminal may be replaced with a terminal device such as a mobile terminal.
Alternatively, in this embodiment, the above-mentioned computer terminal may be located in at least one network device among a plurality of network devices of the computer network.
In this embodiment, the above-mentioned computer terminal may execute the program code of the following steps in the data transmission method: acquiring a storage queue in which a data transmission request is written to a virtual machine; converting a virtual address of a storage queue in a virtual machine into a physical address; mapping the physical address obtained by conversion to a memory address in a host; the data transfer request is written to a target queue associated with the memory address.
Alternatively, fig. 11 is a block diagram of a computer terminal according to an embodiment of the present application. As shown in fig. 11, the computer terminal a may include: one or more (only one is shown) processors 1102, a memory 1104, and a transmission 1106.
The memory may be used to store software programs and modules, such as program instructions/modules corresponding to the data transmission method and apparatus in the embodiments of the present application, and the processor executes the software programs and modules stored in the memory, thereby executing various functional applications and data processing, that is, implementing the data transmission method described above. The memory may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory may further include memory remotely located with respect to the processor, which may be connected to terminal a through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The processor may call the information and the application program stored in the memory through the transmission device to perform the following steps: acquiring a storage queue in which a data transmission request is written to a virtual machine; converting a virtual address of a storage queue in a virtual machine into a physical address; mapping the physical address obtained by conversion to a memory address in a host; the data transfer request is written to a target queue associated with the memory address.
Optionally, the above processor may further execute program code for: and determining the memory address with the target mapping relation with the physical address obtained by conversion in a mapping relation model, wherein the mapping relation model is used for representing the mapping relation between different physical addresses and corresponding memory addresses.
Optionally, the above processor may further execute program code for: and calling a target program to establish a target mapping relation between the physical address and the memory address in response to the memory address which does not comprise the physical address obtained by conversion in the mapping relation model, and adding the target mapping relation into the mapping relation model.
Optionally, the above processor may further execute program code for: mapping the converted physical address to a memory address in the host by using an emulation resource of the virtual storage device, wherein the emulation resource is used for emulating a processing resource of the storage device.
Optionally, the above processor may further execute program code for: determining, in the data transmission request, storage information of the emulation resources of the virtual storage device in the host; and determining, in the virtual storage device, the emulation resources stored based on the storage information.
Optionally, the above processor may further execute program code for: and writing the data transmission request into a target queue associated with the memory address in response to the storage space in the storage information being within the range of the storage space accessible to the virtual machine.
Optionally, the above processor may further execute program code for: a target queue is created and mapped to a virtual machine.
Optionally, the above processor may further execute program code for: in response to the start of the virtual machine, configuring direct memory access remapping information in the host, wherein the direct memory access remapping information is used to enable the storage device to request the host to transmit data from the virtual machine without passing through the processor of the host.
The processor may call the information and the application program stored in the memory through the transmission device to perform the following steps: writing a data transmission request of the virtual machine into a storage queue, wherein the data transmission request is used for requesting to transmit data to be transmitted to storage equipment; determining a virtual address of a storage queue in a virtual machine; and transmitting the virtual address to the host, wherein the virtual address is converted into a physical address by the host, the physical address obtained by conversion is mapped to a memory address in the host, and a data transmission request is written into a target queue associated with the memory address, wherein the data transmission request is read from the target queue by a storage device, and the storage device is used for responding to the data transmission request and transmitting data to be transmitted.
Optionally, the above processor may further execute program code for: a store queue is created based on the target queue.
By adopting the embodiment of the application, a data transmission method is provided. In the embodiment of the application, when the virtual machine needs to transmit data, the storage queue into which the virtual machine has written the data transmission request corresponding to the data to be transmitted can be detected, the virtual address of the storage queue on the virtual machine is converted into a physical address, the physical address is then mapped to a memory address in the host, and the data transmission request is written into the target queue associated with the memory address. The data transmission request for the data to be transmitted can then be read from the target queue and the corresponding data transmitted according to the request, thereby achieving acceleration without adopting an additional central processing unit or hardware, realizing the technical effect of improving the efficiency of data transmission, and solving the technical problem of low efficiency of data transmission.
It will be understood by those skilled in the art that the structure shown in fig. 11 is only schematic, and the computer terminal a may also be a terminal device such as a smart phone (e.g. an Android phone, an iOS phone, etc.), a tablet computer, a palm computer, and a mobile internet device (Mobile Internet Devices, abbreviated as MID), a PAD, etc. Fig. 11 does not limit the structure of the computer terminal a. For example, the computer terminal a may also include more or fewer components (such as a network interface, a display device, etc.) than shown in fig. 11, or have a different configuration than shown in fig. 11.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be implemented by a program for instructing a terminal device to execute in association with hardware, the program may be stored in a computer readable storage medium, and the storage medium may include: flash disk, read-Only Memory (ROM), random access Memory (Random Access Memory, RAM), magnetic disk or optical disk, etc.
Example 6
Embodiments of the present application also provide a computer-readable storage medium. Alternatively, in this embodiment, the computer readable storage medium may be used to store the program code executed by the data transmission method provided in the first embodiment.
Alternatively, in this embodiment, the above-mentioned computer-readable storage medium may be located in any one of the computer terminals in the computer terminal group in the computer network, or in any one of the mobile terminals in the mobile terminal group.
Optionally, in the present embodiment, the computer readable storage medium is configured to store program code for performing the steps of: acquiring a storage queue in which a data transmission request is written to a virtual machine; converting a virtual address of a storage queue in a virtual machine into a physical address; mapping the physical address obtained by conversion to a memory address in a host; the data transfer request is written to a target queue associated with the memory address.
Optionally, the above computer readable storage medium may further execute program code for: and determining the memory address with the target mapping relation with the physical address obtained by conversion in a mapping relation model, wherein the mapping relation model is used for representing the mapping relation between different physical addresses and corresponding memory addresses.
Optionally, the above computer readable storage medium may further execute program code for: and calling a target program to establish a target mapping relation between the physical address and the memory address in response to the memory address which does not comprise the physical address obtained by conversion in the mapping relation model, and adding the target mapping relation into the mapping relation model.
Optionally, the above computer readable storage medium may further execute program code for: mapping the converted physical address to a memory address in the host by using an emulation resource of the virtual storage device, wherein the emulation resource is used for emulating a processing resource of the storage device.
Optionally, the above computer readable storage medium may further execute program code for: determining, in the data transmission request, storage information of the emulation resources of the virtual storage device in the host; and determining, in the virtual storage device, the emulation resources stored based on the storage information.
Optionally, the above computer readable storage medium may further execute program code for: and writing the data transmission request into a target queue associated with the memory address in response to the storage space in the storage information being within the range of the storage space accessible to the virtual machine.
Optionally, the above computer readable storage medium may further execute program code for: a target queue is created and mapped to a virtual machine.
Optionally, the above computer readable storage medium may further execute program code for: in response to a virtual machine boot up, direct memory access remapping information is configured in the host, wherein the direct memory access remapping information is used to cause the storage device to request from the host, without the processor of the host, a transfer of data from the virtual machine.
As an alternative example, the computer readable storage medium is arranged to store program code for performing the steps of: writing a data transmission request of the virtual machine into a storage queue, wherein the data transmission request is used for requesting to transmit data to be transmitted to storage equipment; determining a virtual address of a storage queue in a virtual machine; and transmitting the virtual address to the host, wherein the virtual address is converted into a physical address by the host, the physical address obtained by conversion is mapped to a memory address in the host, and a data transmission request is written into a target queue associated with the memory address, wherein the data transmission request is read from the target queue by a storage device, and the storage device is used for responding to the data transmission request and transmitting data to be transmitted.
Example 7
Embodiments of the application may provide an electronic device that may include a memory and a processor.
Fig. 12 is a block diagram of an electronic device of a data transmission method according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the application described and/or claimed herein.
As shown in fig. 12, the device 1200 includes a computing unit 1201, which may perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 1202 or a computer program loaded from a storage unit 1208 into a Random Access Memory (RAM) 1203. In the RAM 1203, various programs and data required for the operation of the device 1200 may also be stored. The computing unit 1201, the ROM 1202, and the RAM 1203 are connected to each other via a bus 1204. An input/output (I/O) interface 1205 is also connected to the bus 1204.
Various components in the device 1200 are connected to the I/O interface 1205, including: an input unit 1206, such as a keyboard, mouse, etc.; an output unit 1207, such as various types of displays, speakers, and the like; a storage unit 1208, such as a magnetic disk, an optical disk, or the like; and a communication unit 1209, such as a network card, modem, wireless communication transceiver, etc. The communication unit 1209 allows the device 1200 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunications networks.
The computing unit 1201 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 1201 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, Digital Signal Processors (DSPs), and any suitable processor, controller, microcontroller, etc. The computing unit 1201 performs the various methods and processes described above, such as the data transmission method. For example, in some embodiments, the data transmission method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 1208. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 1200 via the ROM 1202 and/or the communication unit 1209. When the computer program is loaded into the RAM 1203 and executed by the computing unit 1201, one or more steps of the above-described data transmission method may be performed. Alternatively, in other embodiments, the computing unit 1201 may be configured to perform the data transmission method in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, and which may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out the methods of the present application may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on a machine, partly on a machine, as a stand-alone software package partly on a machine and partly on a remote machine, or entirely on a remote machine or server.
In the context of the present application, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer. Other types of devices may also be used to provide interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback), and input from the user may be received in any form (including acoustic input, speech input, or tactile input).
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be noted that the foregoing sequence numbers of the embodiments of the present application are merely for description and do not represent the relative merits of the embodiments.
In the foregoing embodiments of the present application, the description of each embodiment has its own emphasis. For a part that is not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technology may be implemented in other manners. The above-described apparatus embodiments are merely exemplary; for example, the division of the units is merely a logical functional division, and there may be other division manners in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, units, or modules, and may be in electrical or other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods of the various embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The foregoing is merely a preferred embodiment of the present application. It should be noted that several modifications and improvements may be made by those skilled in the art without departing from the principles of the present application, and these modifications and improvements shall also fall within the protection scope of the present application.

Claims (14)

1. A method of transmitting data, comprising:
acquiring a storage queue of a virtual machine into which a data transmission request is written, wherein the data transmission request is used for requesting a storage device to transmit data to be transmitted;
converting a virtual address of the storage queue in the virtual machine into a physical address;
mapping the physical address obtained by conversion to a memory address in a host;
and writing the data transmission request into a target queue associated with the memory address, wherein the data transmission request is read from the target queue by the storage device, and the storage device is used for responding to the data transmission request and transmitting the data to be transmitted.
2. The method of claim 1, wherein mapping the translated physical address to a memory address in a host comprises:
and determining the memory address with the target mapping relation with the physical address obtained by conversion in a mapping relation model, wherein the mapping relation model is used for representing the mapping relation between different physical addresses and corresponding memory addresses.
3. The method according to claim 2, wherein the method further comprises:
and calling a target program to establish the target mapping relation between the physical address and the memory address in response to the memory address which does not comprise the physical address obtained by conversion in the mapping relation model, and adding the target mapping relation into the mapping relation model.
4. The method of claim 1, wherein mapping the translated physical address to a memory address in a host comprises:
and mapping the physical address obtained through conversion to the memory address in the host by using simulation resources of a virtual storage device, wherein the simulation resources are used for simulating processing resources of the storage device.
5. The method according to claim 4, wherein the method further comprises:
determining, in the data transmission request, storage information of the emulation resources of the virtual storage device in the host;
determining, in the virtual storage device, the emulation resources stored based on the storage information.
6. The method of claim 5, wherein writing the data transfer request to the target queue associated with the memory address comprises:
writing the data transmission request into the target queue associated with the memory address in response to the storage space in the storage information being within the range of the storage space accessible to the virtual machine.
7. The method according to any one of claims 1 to 6, further comprising:
creating the target queue and mapping the target queue to the virtual machine.
8. The method according to any one of claims 1 to 6, further comprising:
in response to the virtual machine booting, configuring direct memory access remapping information in the host, wherein the direct memory access remapping information is used to enable the storage device to transfer data to and from the virtual machine without involving a processor of the host.
9. A method of transmitting data, comprising:
writing a data transmission request of the virtual machine into a storage queue, wherein the data transmission request is used for requesting a storage device to transmit data to be transmitted;
determining a virtual address of the storage queue in the virtual machine;
and transmitting the virtual address to a host, wherein the virtual address is converted into a physical address by the host, the physical address obtained by conversion is mapped to a memory address in the host, and the data transmission request is written into a target queue associated with the memory address, wherein the data transmission request is read from the target queue by the storage device, and the storage device is used for responding to the data transmission request and transmitting the data to be transmitted.
10. The method according to claim 9, wherein the method further comprises:
the store queue is created based on the target queue.
11. A data transmission system, comprising:
the virtual machine is used for writing a data transmission request into a storage queue, wherein the data transmission request is used for requesting a storage device to transmit data to be transmitted;
the host is used for converting the virtual address of the storage queue in the virtual machine into a physical address, mapping the physical address obtained by conversion to a memory address in the host, and writing the data transmission request into a target queue associated with the memory address;
the storage device is used for reading the data transmission request from the target queue and transmitting the data to be transmitted in response to the data transmission request.
12. The system of claim 11, wherein the host comprises:
a virtual storage device created by a monitor of the virtual machine and configured to provide an emulation resource to map the physical address to the memory address in the host, wherein the emulation resource is configured to emulate a processing resource of the storage device.
13. The system of claim 12, wherein the virtual machine comprises: a virtual machine driver for performing at least one of:
adding, to the data transmission request, storage information of the emulation resources of the virtual storage device in the host;
initializing the virtual storage device;
the store queue is created based on the target queue.
14. An electronic device, comprising: a memory and a processor; the memory is configured to store computer executable instructions, the processor being configured to execute the computer executable instructions, which when executed by the processor, implement the steps of the method of any one of claims 1 to 10.
CN202310652971.3A 2023-06-02 2023-06-02 Data transmission method, system and electronic equipment Pending CN116662223A (en)

Publications (1)

Publication Number Publication Date
CN116662223A true CN116662223A (en) 2023-08-29


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination