CN114625481A - Data processing method and device, readable medium and electronic equipment - Google Patents

Data processing method and device, readable medium and electronic equipment Download PDF

Info

Publication number
CN114625481A
Authority
CN
China
Prior art keywords
data
shared
request message
queue
target data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210287992.5A
Other languages
Chinese (zh)
Other versions
CN114625481B (en)
Inventor
谢永吉
张佳辰
邓良
厉航靖
柴稳
张宇
王剑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Youzhuju Network Technology Co Ltd
Original Assignee
Beijing Youzhuju Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Youzhuju Network Technology Co Ltd filed Critical Beijing Youzhuju Network Technology Co Ltd
Priority to CN202210287992.5A priority Critical patent/CN114625481B/en
Publication of CN114625481A publication Critical patent/CN114625481A/en
Priority to PCT/CN2023/082365 priority patent/WO2023179508A1/en
Application granted granted Critical
Publication of CN114625481B publication Critical patent/CN114625481B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/546Message passing systems or structures, e.g. queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45595Network integration; Enabling network access in virtual machine instances
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/54Indexing scheme relating to G06F9/54
    • G06F2209/548Queue

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present disclosure relates to a data processing method, an apparatus, a readable medium, and an electronic device. The method comprises: receiving a data request message triggered by a user through a virtual machine, wherein the data request message is used for requesting to process target data on a host machine running the virtual machine; determining, according to the data request message, shared data information corresponding to the target data, wherein the shared data information comprises a shared queue, and the shared queue is used for direct data interaction between the virtual machine and the host machine; and processing the target data through the kernel of the host machine according to the data request message and the shared queue. In other words, the virtual machine and the host machine of the present disclosure can exchange data directly through the shared data information without relaying through the Virtiofs Daemon of the host machine, and the kernel of the host machine can directly obtain the data processing request sent by the virtual machine, so that additional system call overhead is avoided and the efficiency of virtual machine data processing is improved.

Description

Data processing method and device, readable medium and electronic equipment
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a data processing method, an apparatus, a readable medium, and an electronic device.
Background
With the rise of the cloud-native concept, containers are increasingly widely used in cloud environments. The image directory corresponding to a container is usually stored on a host machine, and when a secure container accesses the corresponding image file, this needs to be implemented through a virtual machine-host shared directory technology, such as the Virtiofs technology.
However, a file access request in the Virtiofs technology needs to be relayed once through a user-mode Virtiofs Daemon (daemon process), which introduces extra system call overhead and results in low efficiency of virtual machine data processing.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
In a first aspect, the present disclosure provides a data processing method, the method comprising:
receiving a data request message triggered by a user through a virtual machine, wherein the data request message is used for requesting to process target data on a host machine running the virtual machine;
according to the data request message, determining shared data information corresponding to the target data, wherein the shared data information comprises a shared queue, and the shared queue is used for directly carrying out data interaction between the virtual machine and the host machine;
and processing the target data through the kernel of the host machine according to the data request message and the shared queue.
In a second aspect, the present disclosure provides a data processing apparatus, the apparatus comprising:
the data request message receiving module is used for receiving a data request message triggered by a user through a virtual machine, wherein the data request message is used for requesting to process target data on a host machine running the virtual machine;
a shared data information determining module, configured to determine, according to the data request message, shared data information corresponding to the target data, where the shared data information includes a shared queue, and the shared queue is used for directly performing data interaction between the virtual machine and the host;
and the data processing module is used for processing the target data through the kernel of the host machine according to the data request message and the shared queue.
In a third aspect, the present disclosure provides a computer readable medium having stored thereon a computer program which, when executed by a processing apparatus, performs the steps of the method of the first aspect of the present disclosure.
In a fourth aspect, the present disclosure provides an electronic device comprising:
a storage device having at least one computer program stored thereon;
at least one processing device for executing the at least one computer program in the storage device to implement the steps of the method of the first aspect of the disclosure.
According to the above technical scheme, a data request message triggered by a user through a virtual machine is received, and the data request message is used for requesting to process target data on a host machine running the virtual machine; shared data information corresponding to the target data is determined according to the data request message, wherein the shared data information comprises a shared queue, and the shared queue is used for direct data interaction between the virtual machine and the host machine; and the target data is processed through the kernel of the host machine according to the data request message and the shared queue. In other words, the virtual machine and the host machine of the present disclosure can exchange data directly through the shared data information without relaying through the Virtiofs Daemon of the host machine, and the kernel of the host machine can directly obtain the data processing request sent by the virtual machine, so that additional system call overhead is avoided and the efficiency of virtual machine data processing is improved.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale. In the drawings:
FIG. 1 is a schematic diagram illustrating a file access according to an exemplary embodiment of the present disclosure;
FIG. 2 is a flow chart illustrating a method of data processing according to an exemplary embodiment of the present disclosure;
FIG. 3 is a flow chart illustrating another method of data processing according to an exemplary embodiment of the present disclosure;
FIG. 4 is a schematic diagram illustrating a shared mapping space in accordance with an exemplary embodiment of the present disclosure;
FIG. 5 is a data processing schematic shown in accordance with an exemplary embodiment of the present disclosure;
FIG. 6 is a block diagram illustrating a data processing apparatus according to an exemplary embodiment of the present disclosure;
FIG. 7 is a block diagram illustrating a second type of data processing apparatus according to an exemplary embodiment of the present disclosure;
FIG. 8 is a block diagram illustrating a third data processing apparatus according to an exemplary embodiment of the present disclosure;
FIG. 9 is a block diagram illustrating a fourth data processing apparatus according to an exemplary embodiment of the present disclosure;
fig. 10 is a block diagram illustrating a fifth data processing apparatus according to an exemplary embodiment of the present disclosure;
fig. 11 is a block diagram illustrating a sixth data processing apparatus according to an exemplary embodiment of the present disclosure;
fig. 12 is a block diagram illustrating an electronic device according to an exemplary embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It should be noted that the modifiers "a", "an", and "the" in this disclosure are intended to be illustrative rather than limiting, and those skilled in the art will understand that they mean "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
First, an application scenario of the present disclosure will be explained. Fig. 1 is a schematic diagram illustrating file access according to an exemplary embodiment of the present disclosure. As shown in fig. 1, in the Virtiofs technology, the FUSE module of the virtual machine kernel intercepts a file access request, the file access request is forwarded to the Virtiofs Daemon of the host machine through the virtio mechanism, and the Virtiofs Daemon then operates on the corresponding file of the host machine, so that the virtual machine can access a local file of the host machine. However, a file access request in the Virtiofs technology needs to be relayed once through the user-mode Virtiofs Daemon (daemon process), which introduces extra system call overhead and results in low efficiency of virtual machine data processing.
In order to solve the existing problems, the present disclosure provides a data processing method, an apparatus, a readable medium and an electronic device, in which the virtual machine and the host machine may exchange data directly through shared data information without relaying through the Virtiofs Daemon of the host machine, and the kernel of the host machine may directly obtain the data processing request sent by the virtual machine, so that extra system call overhead is avoided, thereby improving the efficiency of virtual machine data processing.
The present disclosure is described below with reference to specific examples.
Fig. 2 is a flowchart illustrating a data processing method according to an exemplary embodiment of the present disclosure, which may include, as shown in fig. 2:
s201, receiving a data request message triggered by a user through a virtual machine.
The data request message may be used to request processing of target data on a host machine running the virtual machine, and the data request message may include a data identifier of the target data. Illustratively, the target data may be a target file, the data request message may be a request to access the target file on the host machine, and the data request message may include a file identifier of the target file. In this embodiment, the target data is described by taking a file as an example.
In this step, in the process of operating the virtual machine software by the user, if the access operation to the target data on the host machine running the virtual machine is triggered, the data request message may be generated according to the data identifier of the target data and the type of the access operation.
And S202, determining shared data information corresponding to the target data according to the data request message.
The shared data information may include a shared queue, and the shared queue may be used for data interaction between the virtual machine and the host directly. The shared data information may be stored in a shared mapping space that is accessible to both the virtual machine and the host machine.
In this step, after receiving the data request message, the shared data information corresponding to the target data may be determined from the shared mapping space according to the data request message.
S203, processing the target data through the kernel of the host according to the data request message and the shared queue.
The shared queue may include a pending queue, and the shared data information further includes an address mapping table.
In this step, after the shared queue corresponding to the target data is determined, the data request message may be sent to the queue to be processed in the shared queue and the host machine may be notified; the kernel of the host machine may then extract the data request message from the queue to be processed and process the target data according to the data request message.
In a possible implementation manner, before the target data is processed by the kernel of the host according to the data request message and the shared queue, a virtual address of the host corresponding to a physical address of the virtual machine may be determined according to the address mapping table; the address mapping table may include a correspondence between a physical address of the virtual machine and a virtual address of the host machine; and processing the target data through the kernel of the host machine according to the virtual address, the data request message and the shared queue.
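As an illustrative aid (not part of the original disclosure), the address mapping table lookup described above may be sketched in C as follows. The entry layout of three 64-bit fields matches the description given for FIG. 4 below, while the field names and the linear scan are assumptions made for readability only.

#include <stddef.h>
#include <stdint.h>

/* One address mapping table entry: the correspondence between a virtual
 * machine physical address and a host virtual address (field names are
 * assumptions; the three 64-bit fields follow the layout of FIG. 4). */
struct addr_map_entry {
    uint64_t guest_phys;   /* physical address of the virtual machine */
    uint64_t host_virt;    /* virtual address of the host machine     */
    uint64_t length;       /* mapping length in bytes                 */
};

/* Translate a guest physical address to the corresponding host virtual
 * address by scanning the table; returns 0 when no entry covers it. */
static uint64_t gpa_to_hva(const struct addr_map_entry *table, size_t n,
                           uint64_t gpa)
{
    for (size_t i = 0; i < n; i++) {
        if (gpa >= table[i].guest_phys &&
            gpa <  table[i].guest_phys + table[i].length)
            return table[i].host_virt + (gpa - table[i].guest_phys);
    }
    return 0;
}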
For example, after determining the virtual address of the host corresponding to the physical address of the virtual machine, the data request message may be sent to the queue to be processed and notified to the host, and the kernel of the host may extract the data request message from the queue to be processed and process the target data according to the virtual address and the data request message.
By adopting the method, the virtual machine and the host machine can carry out data interaction through shared data information without transferring through a Virtiofs Daemon (Daemon process) of the host machine, and the kernel of the host machine can directly acquire the data processing request sent by the virtual machine, so that extra system calling overhead can be avoided, and the data processing efficiency of the virtual machine is improved.
Fig. 3 is a flowchart illustrating another data processing method according to an exemplary embodiment of the present disclosure, which may include, as shown in fig. 3:
s301, receiving a data request message triggered by a user through a virtual machine.
The data request message may be used to request processing of target data on a host machine running the virtual machine, and the data request message may include a data identifier of the target data. Illustratively, the target data may be a target file, the data request message may be a request to access the target file on the host machine, and the data request message may include a file identifier of the target file. In this embodiment, the target data is described by taking a file as an example.
S302, determining whether a shared queue corresponding to the target data exists in the shared mapping space, and if it is determined that the shared queue corresponding to the target data does not exist in the shared mapping space, performing steps S303 to S306, and if it is determined that the shared queue corresponding to the target data exists in the shared mapping space, performing steps S307 to S314.
The shared data information may include an address mapping table and a shared queue, the shared queue may be used for data interaction between the virtual machine and the host machine, and the shared queue may include a pending queue and a completion queue. The shared data information may be stored in the shared mapping space, which is accessible to both the virtual machine and the host machine. Illustratively, the shared mapping space may be a BAR (Base Address Register) of the PCI (Peripheral Component Interconnect) device corresponding to the Virtiofs of the virtual machine. The shared mapping space may include two regions: one region is used to store the address mapping table, and the other region may include a plurality of mapping block regions; the mapping block regions are used to store mapping information of opened files, the mapping information may include a pending queue, a completion queue, and a notification region, and different files correspond to different mapping block regions.
It should be noted that, in the process of starting the virtual machine, a corresponding address mapping table may be created for the virtual machine through the VMM (Virtual Machine Monitor), the corresponding EPT (Extended Page Tables) mapping may be created, and the address mapping table may be bound to the region of the shared mapping space used to store the address mapping table.
Fig. 4 is a schematic diagram of a shared mapping space according to an exemplary embodiment of the disclosure. As shown in fig. 4, the size of the shared mapping space is 1GB, and the space occupied by the address mapping table is 1MB; the address mapping table may include a plurality of entries, and each entry includes three 64-bit fields: the physical address of the virtual machine, the virtual address of the host machine, and the mapping length. The space occupied by each mapping block region is 128KB, in which the pending queue occupies 64KB, the completion queue occupies 8KB, the maximum depth of the pending queue and the completion queue may be 1024, the notification region occupies 4KB, and the remaining space in the mapping block region may be used as a padding region for data alignment.
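For illustration only, the layout of FIG. 4 can be expressed as C structures using the sizes given above (1 MB address mapping table, 128 KB mapping block regions containing a 64 KB pending queue, an 8 KB completion queue, a 4 KB notification region and padding). The structure and field names are assumptions and not part of the disclosure.

#include <stdint.h>

#define SHARED_SPACE_BYTES  (1u << 30)        /* 1 GB shared mapping space  */
#define MAP_TABLE_BYTES     (1u << 20)        /* 1 MB address mapping table */
#define MAP_BLOCK_BYTES     (128u * 1024)     /* 128 KB per mapping block   */
#define SQ_BYTES            (64u * 1024)      /* pending queue              */
#define CQ_BYTES            (8u * 1024)       /* completion queue           */
#define NOTIFY_BYTES        (4u * 1024)       /* notification region        */
#define PAD_BYTES           (MAP_BLOCK_BYTES - SQ_BYTES - CQ_BYTES - NOTIFY_BYTES)

struct addr_map_entry {                       /* three 64-bit fields        */
    uint64_t guest_phys, host_virt, length;
};

/* One mapping block region: mapping information for a single opened file. */
struct map_block {
    uint8_t pending_queue[SQ_BYTES];
    uint8_t completion_queue[CQ_BYTES];
    uint8_t notify[NOTIFY_BYTES];
    uint8_t pad[PAD_BYTES];                   /* padding for data alignment */
};

/* The shared mapping space (e.g. a PCI BAR): the address mapping table
 * region followed by an array of mapping block regions. */
struct shared_mapping_space {
    union {
        struct addr_map_entry entries[MAP_TABLE_BYTES / sizeof(struct addr_map_entry)];
        uint8_t raw[MAP_TABLE_BYTES];         /* pad the table region to 1 MB */
    } map_table;
    struct map_block blocks[(SHARED_SPACE_BYTES - MAP_TABLE_BYTES) / MAP_BLOCK_BYTES];
};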
In this step, after receiving a data processing request message triggered by a user, a Virtiofs module of the virtual machine kernel may determine whether a mapping block region corresponding to the target data exists in the shared mapping space, and when it is determined that the mapping block region corresponding to the target data exists in the shared mapping space, it indicates that a shared queue corresponding to the target data exists in the shared mapping space, and when it is determined that the mapping block region corresponding to the target data does not exist in the shared mapping space, it indicates that the shared queue corresponding to the target data does not exist in the shared mapping space.
And S303, determining a mapping block area corresponding to the target data from the shared mapping space.
In this step, when it is determined that the shared queue corresponding to the target data does not exist in the shared mapping space, it indicates that the mapping block region corresponding to the target data does not exist in the shared mapping space. In this case, the Virtiofs module of the virtual machine kernel may send a mapping information creation request message to the Virtiofs Daemon (daemon process) through a preset FUSE command, the Virtiofs Daemon may forward the mapping information creation request message to the VMM after receiving it, and the VMM may take any free mapping block region in the shared mapping space as the mapping block region corresponding to the target data after receiving the mapping information creation request message. For example, the mapping block regions in the shared mapping space may be traversed, and the first free mapping block region found may be used as the mapping block region corresponding to the target data.
S304, initializing the mapping block area.
In this step, after determining the mapping block region corresponding to the target data, the VMM may perform initialization processing on the mapping block region, and allocate a corresponding space to the mapping block region.
S305, creating a shared queue corresponding to the target data in the mapping block area.
In this step, after the VMM initializes the mapping block region, the queue to be processed and the completion queue may be initialized through the io_uring_setup() interface, the corresponding EPT mapping may be established, and the shared queue may be bound to the mapping block region.
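A minimal user-space sketch of the io_uring_setup() call mentioned above is given below. The queue depth of 1024 matches the maximum depth described for FIG. 4, and the IORING_SETUP_SQPOLL flag anticipates the kernel polling thread discussed later in this description; the VMM-specific steps (establishing the EPT mapping and binding the rings into the mapping block region) are not shown, and the idle timeout value is an arbitrary assumption.

#include <linux/io_uring.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Create an io_uring instance whose submission (pending) and completion
 * rings back one mapping block region. */
static int create_shared_queues(void)
{
    struct io_uring_params params;

    memset(&params, 0, sizeof(params));
    params.flags = IORING_SETUP_SQPOLL;   /* kernel thread polls the SQ  */
    params.sq_thread_idle = 1000;         /* ms before the thread sleeps */

    /* Raw system call; liburing's io_uring_queue_init_params() is the
     * usual wrapper around the same interface. */
    return syscall(__NR_io_uring_setup, 1024, &params);
}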
Further, after the shared queue corresponding to the target data is created, shared data information corresponding to the target data and a mapping block region corresponding to the shared data information may be obtained, then, a data identifier corresponding to the target data may be determined, a corresponding relationship between the data identifier and the mapping block region may be established, and the corresponding relationship may be sent to the Virtiofs module of the virtual machine kernel.
S306, adding the target data and the shared queue corresponding to the target data to a preset queue association relationship.
The preset queue association relationship may include a correspondence relationship between different data and the shared queue.
In this step, after the shared queue corresponding to the target data is created in the mapping block region, the target data and the shared queue corresponding to the target data may be added to the preset queue association relationship, and in addition, after the target data is closed, the target data in the preset queue association relationship and the shared queue corresponding to the target data may be deleted.
It should be noted that, in step S302, it may also be determined whether a shared queue corresponding to the target data exists in the shared mapping space through the preset queue association relationship, for example, in a case where it is determined that the preset queue association relationship includes the target data, it is determined that a shared queue corresponding to the target data exists in the shared mapping space, and in a case where it is determined that the preset queue association relationship does not include the target data, it is determined that a shared queue corresponding to the target data does not exist in the shared mapping space.
And S307, determining shared data information corresponding to the target data according to the data request message.
In this step, when it is determined that the shared queue corresponding to the target data exists in the shared mapping space, the data identifier in the data request message may be determined, and a target mapping block area corresponding to the data identifier is determined according to a correspondence between a pre-established data identifier and the mapping block area, where shared data information stored in the target mapping block area is shared data information corresponding to the target data.
S308, according to the address mapping table, determining the virtual address of the host machine corresponding to the physical address of the virtual machine.
In this step, after the shared data information corresponding to the target data is determined, the virtual address of the host machine corresponding to the physical address of the virtual machine may be determined according to the address mapping table in the shared data information, so that the virtual machine physical address corresponding to the data memory of the target data can be replaced with the host virtual address.
S309, determining a shared data request message corresponding to the data request message according to a preset data format.
The preset data format may be a data format that can be processed by the host kernel.
In this step, after the shared data information corresponding to the target data is determined, the corresponding information may be extracted from the data request message according to the preset data format to obtain the shared data request message. For example, the preset data format may be the data structure corresponding to the shared data request message; the shared data request message corresponding to the data request message can be obtained by extracting, from the data request message, the value of each element of that data structure and filling the data structure with the extracted values.
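Purely as an illustration of the idea of a "preset data format", the following C sketch defines a hypothetical shared data request structure and fills it from the intercepted request; neither the field names nor the field set are specified by the disclosure.

#include <stdint.h>
#include <string.h>

/* Hypothetical preset data format for a shared data request message; the
 * disclosure only requires a format that the host kernel can process. */
struct shared_data_request {
    uint32_t opcode;      /* type of access operation, e.g. read or write  */
    uint32_t file_id;     /* data identifier of the target data            */
    uint64_t offset;      /* offset within the target data                 */
    uint64_t length;      /* number of bytes to process                    */
    uint64_t buf_hva;     /* data buffer address, already translated from
                             a guest physical to a host virtual address    */
};

/* Extract the element values from the intercepted request and fill the
 * shared structure with them. */
static void fill_shared_request(struct shared_data_request *out,
                                uint32_t opcode, uint32_t file_id,
                                uint64_t offset, uint64_t length,
                                uint64_t buf_hva)
{
    memset(out, 0, sizeof(*out));
    out->opcode  = opcode;
    out->file_id = file_id;
    out->offset  = offset;
    out->length  = length;
    out->buf_hva = buf_hva;
}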
And S310, sending the shared data request message to the queue to be processed.
S311, extracting the shared data request message from the queue to be processed through the host.
The mapping block area corresponding to the target data may include a notification area, and the notification area may not establish a corresponding EPT mapping.
In this step, after the shared data request message is sent to the queue to be processed, the virtual machine may send a data processing request message to the host machine through the notification area, and after receiving the data processing request message, the host machine extracts the shared data request message from the queue to be processed. For example, the notification region may be used to implement the semantics of io_uring_enter() with the IORING_ENTER_SQ_WAKEUP flag: when the virtual machine performs an MMIO (Memory-mapped I/O) write operation on the notification region, the system exits to the KVM (Kernel-based Virtual Machine) module, and the actual operation corresponding to io_uring_enter() is executed through the KVM module to wake up the SQ polling thread on the host machine to process the data processing request message in the queue to be processed. In this way, in a non-polling mode (e.g., an interrupt mode), the data processing request message can be sent to the host machine through the io_uring_enter() interface.
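The wake-up semantics described above can be sketched with the standard io_uring_enter() system call. Note that in the disclosure the equivalent operation is performed inside the KVM module after the MMIO exit; this sketch shows the ordinary user-space form of the call for reference only.

#include <linux/io_uring.h>
#include <stddef.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Wake the SQ polling thread so it drains the pending queue: nothing is
 * submitted here (to_submit = 0, min_complete = 0); IORING_ENTER_SQ_WAKEUP
 * only wakes the kernel thread if it has gone to sleep. */
static int notify_pending_queue(int ring_fd)
{
    return syscall(__NR_io_uring_enter, ring_fd, 0, 0,
                   IORING_ENTER_SQ_WAKEUP, NULL, 0);
}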
It should be noted that, when the VMM calls io_uring_enter(), the IORING_SETUP_SQPOLL flag bit also needs to have been set (at io_uring setup time) so that a corresponding kernel thread is created to asynchronously process the shared data request message. In addition, the VMM may call the io_uring_register() interface to perform the IORING_REGISTER_EVENTFD operation, registering the eventfd corresponding to the virtual machine's Virtiofs interrupt with the io_uring as the notification mode of the completion queue.
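A sketch of the IORING_REGISTER_EVENTFD registration mentioned above follows; how the resulting eventfd is wired to the virtual machine's Virtiofs interrupt (for example through a KVM irqfd, see the sketch further below) is outside this fragment.

#include <linux/io_uring.h>
#include <sys/eventfd.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Register an eventfd that the kernel signals whenever a completion entry
 * is posted to the completion queue; returns the eventfd on success. */
static int register_completion_eventfd(int ring_fd)
{
    int efd = eventfd(0, EFD_NONBLOCK);
    if (efd < 0)
        return -1;

    /* IORING_REGISTER_EVENTFD takes a pointer to the eventfd descriptor. */
    if (syscall(__NR_io_uring_register, ring_fd,
                IORING_REGISTER_EVENTFD, &efd, 1) < 0)
        return -1;

    return efd;
}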
S312, processing the target data through the kernel of the host according to the virtual address and the shared data request message.
S313, after the target data is processed by the kernel of the host, sending the processing result to the completion queue.
In this step, after the kernel of the host finishes processing the target data, the processing result may be sent to the completion queue, and the Virtiofs module of the virtual machine is notified in an interrupt injection manner.
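The disclosure only states that the Virtiofs module is notified "in an interrupt injection manner". One common way to achieve this in a KVM-based VMM is to bind the completion eventfd registered above to a guest interrupt line with the KVM_IRQFD ioctl; the following sketch rests on that assumption, and the GSI number is illustrative.

#include <linux/kvm.h>
#include <string.h>
#include <sys/ioctl.h>

/* Bind an eventfd to a guest interrupt line, so that each completion
 * signalled on the eventfd is injected into the virtual machine as a
 * Virtiofs interrupt. vm_fd is the KVM VM file descriptor. */
static int bind_completion_irqfd(int vm_fd, int event_fd, unsigned int gsi)
{
    struct kvm_irqfd irqfd;

    memset(&irqfd, 0, sizeof(irqfd));
    irqfd.fd  = event_fd;   /* eventfd registered with IORING_REGISTER_EVENTFD */
    irqfd.gsi = gsi;        /* guest interrupt line for the Virtiofs device     */

    return ioctl(vm_fd, KVM_IRQFD, &irqfd);
}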
And S314, extracting a processing result corresponding to the target data from the completion queue through the virtual machine.
In this step, after receiving the data processing completion notification, the Virtiofs module of the virtual machine may extract the processing result corresponding to the target data from the completion queue.
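A guest-side sketch of extracting the processing result from the completion queue is given below; the ring layout is assumed to mirror the io_uring completion ring, and memory-ordering barriers are omitted for brevity.

#include <linux/io_uring.h>
#include <stdint.h>

/* Pointers into the shared completion ring, obtained when the mapping
 * block region was mapped into the virtual machine (names are assumed). */
struct shared_cq {
    volatile uint32_t *head;        /* consumer index, updated by the guest */
    volatile uint32_t *tail;        /* producer index, updated by the host  */
    uint32_t ring_mask;             /* number of entries minus one          */
    struct io_uring_cqe *cqes;      /* completion entry array               */
};

/* Copy one completion entry out of the ring; returns 1 on success and 0
 * when the completion queue is empty. */
static int pop_completion(struct shared_cq *cq, struct io_uring_cqe *out)
{
    uint32_t head = *cq->head;

    if (head == *cq->tail)
        return 0;                            /* no processing result yet */

    *out = cq->cqes[head & cq->ring_mask];   /* the processing result    */
    *cq->head = head + 1;                    /* mark the entry consumed  */
    return 1;
}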
It should be noted that, after the host machine has finished processing the target data, if it is determined that the access operation corresponding to the target data is cancelled, the Virtiofs module of the virtual machine kernel may request, through a corresponding FUSE command, the Virtiofs Daemon to delete the binding between the target data and the mapping block region corresponding to the target data, cancel all the EPT mappings corresponding to the target data, release the address space corresponding to the target data, and delete the file descriptor corresponding to the target data.
Fig. 5 is a schematic diagram illustrating data processing according to an exemplary embodiment of the present disclosure, where as shown in fig. 5, thick lines represent data interactions during data processing, and thin lines represent control interactions during data processing. The shared data information is stored in the shared mapping space, the virtual machine can send a request to a virtual machine monitor through Virtiofs Daemon (Daemon process) to create the shared data information in the shared mapping space and create corresponding EPT mapping, so that the virtual machine can directly send a data processing request to the host kernel through the shared mapping space, and after the host processes the data processing request, a processing result can be sent to the virtual machine through the shared mapping space.
By adopting the above method, the address mapping table, the queue to be processed and the completion queue on the host machine are mapped into the kernel of the virtual machine, so that the data request message of the virtual machine can be submitted directly to the kernel of the host machine in an io_uring manner without relaying through the Virtiofs Daemon (daemon process) of the host machine, and the kernel of the host machine can directly obtain the data processing request sent by the virtual machine; in this way, extra system call overhead can be avoided, thereby improving the data processing efficiency of the virtual machine.
Fig. 6 is a block diagram illustrating a data processing apparatus according to an exemplary embodiment of the present disclosure, which may include, as shown in fig. 6:
a data request message receiving module 601, configured to receive a data request message triggered by a user through a virtual machine, where the data request message is used to request processing of target data on a host running the virtual machine;
a shared data information determining module 602, configured to determine, according to the data request message, shared data information corresponding to the target data, where the shared data information includes a shared queue, and the shared queue is used for performing data interaction between the virtual machine and a kernel of the host directly;
a data processing module 603, configured to process the target data through a kernel of the host according to the data request message and the shared queue.
Optionally, the shared data information further includes an address mapping table, and fig. 7 is a block diagram of a second data processing apparatus according to an exemplary embodiment of the disclosure, as shown in fig. 7, the apparatus further includes:
a virtual address determining module 604, configured to determine, according to the address mapping table, a virtual address of the host corresponding to the physical address of the virtual machine; the address mapping table comprises the corresponding relation between the physical address of the virtual machine and the virtual address of the host machine;
the data processing module 603 is further configured to:
and processing the target data through the kernel of the host according to the virtual address, the data request message and the shared queue.
Optionally, the shared data information is stored in a shared mapping space, and both the virtual machine and the host machine can access the shared mapping space; fig. 8 is a block diagram illustrating a third data processing apparatus according to an exemplary embodiment of the present disclosure, which further includes, as shown in fig. 8:
a shared queue determining module 605, configured to determine whether a shared queue corresponding to the target data exists in the shared mapping space;
the shared data information determining module 602 is further configured to:
and under the condition that the shared queue corresponding to the target data exists in the shared mapping space, determining the shared data information corresponding to the target data according to the data request message.
Optionally, the shared mapping space includes a plurality of mapping block regions, and fig. 9 is a block diagram of a fourth data processing apparatus according to an exemplary embodiment of the disclosure, where as shown in fig. 9, the apparatus further includes:
a mapping block region determining module 606, configured to determine, when it is determined that the shared queue corresponding to the target data does not exist in the shared mapping space, a mapping block region corresponding to the target data from the shared mapping space;
an initialization module 607 for initializing the mapping block region;
a shared queue creating module 608, configured to create a shared queue corresponding to the target data in the mapping block area.
Alternatively, fig. 10 is a block diagram of a fifth data processing apparatus according to an exemplary embodiment of the disclosure, as shown in fig. 10, the apparatus further includes:
an adding module 609, configured to add the target data and the shared queue corresponding to the target data to a preset queue association relationship, where the preset queue association relationship includes a correspondence relationship between different data and a shared queue.
Optionally, the shared queue comprises a pending queue; the data processing module 603 is further configured to: determining a shared data request message corresponding to the data request message according to a preset data format;
sending the shared data request message to the queue to be processed;
extracting the shared data request message from the queue to be processed through the host;
and processing the target data through the kernel of the host machine according to the virtual address and the shared data request message.
Optionally, the shared data information further includes a completion queue, and fig. 11 is a block diagram of a sixth data processing apparatus according to an exemplary embodiment of the present disclosure, as shown in fig. 11, the apparatus further includes:
a result sending module 610, configured to send a processing result to the completion queue after the target data is processed by the kernel of the host;
the result extracting module 611 is configured to extract, by the virtual machine, a processing result corresponding to the target data from the completion queue.
Optionally, the mapping block region includes a notification region; the device also includes:
the notification module is used for sending a data processing request message to the host machine through the notification area;
the data processing module 603 is further configured to:
and after the host receives the data processing request message, extracting the shared data request message from the queue to be processed.
Through the device, the virtual machine and the host machine can perform data interaction through shared data information, transfer is not required to be performed through Virtiofs Daemon (Daemon process) of the host machine, and the kernel of the host machine can directly acquire a data processing request sent by the virtual machine, so that extra system calling overhead can be avoided, and the data processing efficiency of the virtual machine is improved.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Referring now to FIG. 12, shown is a schematic diagram of an electronic device 1200 suitable for use in implementing embodiments of the present disclosure. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 12 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 12, the electronic device 1200 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 1201 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 1202 or a program loaded from a storage device 1208 into a Random Access Memory (RAM) 1203. In the RAM 1203, various programs and data necessary for the operation of the electronic apparatus 1200 are also stored. The processing apparatus 1201, the ROM 1202, and the RAM 1203 are connected to each other by a bus 1204. An input/output (I/O) interface 1205 is also connected to bus 1204.
Generally, the following devices may be connected to the I/O interface 1205: input devices 1206 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, or the like; output devices 1207 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, or the like; storage devices 1208 including, for example, magnetic tape, hard disk, etc.; and a communication device 1209. The communication device 1209 may allow the electronic apparatus 1200 to communicate wirelessly or by wire with other apparatuses to exchange data. While fig. 12 illustrates an electronic device 1200 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication device 1209, or installed from the storage device 1208, or installed from the ROM 1202. The computer program, when executed by the processing apparatus 1201, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communications network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: receiving a data request message triggered by a user through a virtual machine, wherein the data request message is used for requesting to process target data on a host machine running the virtual machine; according to the data request message, determining shared data information corresponding to the target data, wherein the shared data information comprises a shared queue, and the shared queue is used for directly carrying out data interaction between the virtual machine and the host machine; and processing the target data through the kernel of the host machine according to the data request message and the shared queue.
Computer program code for carrying out operations for the present disclosure may be written in any combination of one or more programming languages, including but not limited to an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present disclosure may be implemented by software or hardware. The name of the module does not constitute a limitation to the module itself in some cases, for example, the data request message receiving module may also be described as "a module that receives a data request message triggered by a user through a virtual machine, the data request message being for requesting processing of target data on a host machine running the virtual machine".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), system on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
Example 1 provides, in accordance with one or more embodiments of the present disclosure, a data processing method, the method comprising: receiving a data request message triggered by a user through a virtual machine, wherein the data request message is used for requesting to process target data on a host machine running the virtual machine; according to the data request message, determining shared data information corresponding to the target data, wherein the shared data information comprises a shared queue, and the shared queue is used for directly carrying out data interaction between the virtual machine and the host machine; and processing the target data through the kernel of the host machine according to the data request message and the shared queue.
Example 2 provides the method of example 1, the shared data information further including an address mapping table, the method further including, before the processing the target data by the kernel of the host according to the data request message and the shared queue,:
determining a virtual address of the host machine corresponding to the physical address of the virtual machine according to the address mapping table; the address mapping table comprises a corresponding relation between a physical address of the virtual machine and a virtual address of the host machine;
the processing the target data through the kernel of the host according to the data request message and the shared queue includes:
and processing the target data through the kernel of the host machine according to the virtual address, the data request message and the shared queue.
Example 3 provides the method of example 2, the shared data information stored in a shared mapping space accessible to both the virtual machine and the host machine, in accordance with one or more embodiments of the present disclosure; before determining, according to the data request message, shared data information corresponding to the target data, the method further includes: determining whether a shared queue corresponding to the target data exists in the shared mapping space; the determining, according to the data request message, shared data information corresponding to the target data includes: and under the condition that the shared queue corresponding to the target data exists in the shared mapping space, determining the shared data information corresponding to the target data according to the data request message.
Example 4 provides the method of example 3, the shared mapping space comprising a plurality of mapping block regions, the method further comprising: under the condition that the shared queue corresponding to the target data does not exist in the shared mapping space, determining a mapping block region corresponding to the target data from the shared mapping space; initializing the mapping block region; and creating a shared queue corresponding to the target data in the mapping block area.
Example 5 provides the method of example 4, further comprising, in accordance with one or more embodiments of the present disclosure: adding the target data and the shared queue corresponding to the target data to a preset queue association relationship, wherein the preset queue association relationship comprises the correspondence relationship between different data and the shared queue.
Example 6 provides the method of example 4, the shared queue comprising a pending queue; the processing the target data through the kernel of the host according to the virtual address, the data request message and the shared queue includes: determining a shared data request message corresponding to the data request message according to a preset data format; sending the shared data request message to the queue to be processed; extracting, by the host, the shared data request message from the pending queue; and processing the target data through the kernel of the host machine according to the virtual address and the shared data request message.
Example 7 provides the method of example 6, the shared data information further including a completion queue, the method further including: after the target data is processed through the kernel of the host machine, a processing result is sent to the completion queue; and extracting a processing result corresponding to the target data from the completion queue through the virtual machine.
Example 8 provides the method of example 6, the map block region comprising a notification region, in accordance with one or more embodiments of the present disclosure; before the extracting, by the host, the shared data request message from the pending queue, the method further comprises: sending a data processing request message to the host machine through the notification area; the extracting, by the host, the shared data request message from the pending queue comprises: and after the host receives the data processing request message, extracting the shared data request message from the queue to be processed.
Example 9 provides, in accordance with one or more embodiments of the present disclosure, a data processing apparatus, the apparatus comprising: the data request message receiving module is used for receiving a data request message triggered by a user through a virtual machine, wherein the data request message is used for requesting to process target data on a host machine running the virtual machine; a shared data information determining module, configured to determine, according to the data request message, shared data information corresponding to the target data, where the shared data information includes a shared queue, and the shared queue is used for performing data interaction directly between the virtual machine and a kernel of the host; and the data processing module is used for processing the target data through the kernel of the host machine according to the data request message and the shared queue.
Example 10 provides the apparatus of example 9, the shared data information further including an address mapping table, the apparatus further including: a virtual address determining module, configured to determine, according to the address mapping table, a virtual address of the host corresponding to a physical address of the virtual machine; the address mapping table comprises a corresponding relation between a physical address of the virtual machine and a virtual address of the host machine; the data processing module is further configured to: and processing the target data through the kernel of the host machine according to the virtual address, the data request message and the shared queue.
Example 11 provides the apparatus of example 10, the shared data information stored in a shared mapping space accessible to both the virtual machine and the host, in accordance with one or more embodiments of the present disclosure; the apparatus further comprises: a shared queue determining module, configured to determine whether a shared queue corresponding to the target data exists in the shared mapping space; the shared data information determining module is further configured to: under the condition that the shared queue corresponding to the target data exists in the shared mapping space, determine the shared data information corresponding to the target data according to the data request message.
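On a Linux host, one way such a jointly accessible space could be set up is with an anonymous memory file that the host maps and then exposes to the guest as device memory; memfd_create and the 16 MiB size in the sketch below are assumptions made for illustration, not mechanisms prescribed by this disclosure.

#define _GNU_SOURCE
#include <sys/mman.h>
#include <unistd.h>
#include <stdio.h>

#define SHARED_SPACE_SIZE (16u << 20)        /* assumed 16 MiB space */

/* Create and map the shared mapping space on the host side. */
void *create_shared_space(void)
{
    int fd = memfd_create("shared-mapping-space", 0);
    if (fd < 0) {
        perror("memfd_create");
        return NULL;
    }
    if (ftruncate(fd, SHARED_SPACE_SIZE) < 0) {
        perror("ftruncate");
        close(fd);
        return NULL;
    }
    void *space = mmap(NULL, SHARED_SPACE_SIZE, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
    if (space == MAP_FAILED) {
        perror("mmap");
        close(fd);
        return NULL;
    }
    return space;   /* the same pages would then be mapped into the guest */
}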
Example 12 provides the apparatus of example 11, the shared mapping space comprising a plurality of mapping block regions, the apparatus further comprising: a mapping block region determining module, configured to determine, when it is determined that a shared queue corresponding to the target data does not exist in the shared mapping space, a mapping block region corresponding to the target data from the shared mapping space; an initialization module, configured to initialize the mapping block region; and a shared queue creating module, configured to create a shared queue corresponding to the target data in the mapping block region.
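Building on the structures sketched above, each mapping block region could be a fixed-size slot inside the shared mapping space holding a doorbell, a pending queue and a completion queue for one piece of target data; the 64 KiB block size and the in_use flag are assumptions made for illustration.

/* Assumes notify_region, pending_queue, completion_queue and
 * SHARED_SPACE_SIZE from the earlier sketches are in scope. */
#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define BLOCK_SIZE  (64u << 10)               /* assumed 64 KiB per block */
#define BLOCK_COUNT (SHARED_SPACE_SIZE / BLOCK_SIZE)

struct map_block {
    uint32_t in_use;                          /* 0 = free, 1 = allocated  */
    struct notify_region    notify;
    struct pending_queue    pending;
    struct completion_queue completion;
};

/* Pick a free mapping block region, initialise it and mark it allocated. */
struct map_block *alloc_map_block(void *shared_space)
{
    for (unsigned i = 0; i < BLOCK_COUNT; i++) {
        struct map_block *b = (struct map_block *)
            ((char *)shared_space + (size_t)i * BLOCK_SIZE);
        if (!b->in_use) {
            memset(b, 0, sizeof(*b));         /* zero queues and doorbell */
            b->in_use = 1;
            return b;
        }
    }
    return NULL;                              /* shared space exhausted */
}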
Example 13 provides the apparatus of example 12, the apparatus further comprising, in accordance with one or more embodiments of the present disclosure: an adding module, used for adding the target data and the shared queue corresponding to the target data to a preset queue association relationship, wherein the preset queue association relationship comprises correspondences between different data and their shared queues.
Example 14 provides the apparatus of example 12, the shared queue comprising a pending queue, in accordance with one or more embodiments of the present disclosure; the data processing module is further configured to: determine a shared data request message corresponding to the data request message according to a preset data format; send the shared data request message to the pending queue; extract, by the host, the shared data request message from the pending queue; and process the target data through the kernel of the host machine according to the virtual address and the shared data request message.
Example 15 provides the apparatus of example 14, the shared data information further including a completion queue, the apparatus further including: a result sending module, used for sending a processing result to the completion queue after the target data is processed by the kernel of the host machine; and a result extraction module, used for extracting a processing result corresponding to the target data from the completion queue through the virtual machine.
Example 16 provides the apparatus of example 14, the mapping block region comprising a notification region, in accordance with one or more embodiments of the present disclosure; the apparatus further comprises: a notification module, used for sending a data processing request message to the host machine through the notification region; the data processing module is further configured to: after the host receives the data processing request message, extract the shared data request message from the pending queue.
Example 17 provides a computer readable medium having stored thereon a computer program that, when executed by a processing apparatus, performs the steps of the method of any of examples 1-8, in accordance with one or more embodiments of the present disclosure.
Example 18 provides, in accordance with one or more embodiments of the present disclosure, an electronic device, comprising: a storage device having a computer program stored thereon; processing means for executing said computer program in said storage means to carry out the steps of the method of any of examples 1-8.
The foregoing description is merely an illustration of the preferred embodiments of the present disclosure and of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the particular combination of features described above, and also covers other technical solutions formed by any combination of those features or their equivalents without departing from the concept of the disclosure, for example, a technical solution formed by replacing the features described above with technical features having similar functions disclosed in (but not limited to) the present disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims. With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.

Claims (11)

1. A method of data processing, the method comprising:
receiving a data request message triggered by a user through a virtual machine, wherein the data request message is used for requesting to process target data on a host machine running the virtual machine;
according to the data request message, determining shared data information corresponding to the target data, wherein the shared data information comprises a shared queue, and the shared queue is used for directly carrying out data interaction between the virtual machine and the host machine;
and processing the target data through the kernel of the host machine according to the data request message and the shared queue.
2. The method of claim 1, wherein the shared data information further comprises an address mapping table, and wherein before the processing the target data by the kernel of the host according to the data request message and the shared queue, the method further comprises:
determining a virtual address of the host machine corresponding to the physical address of the virtual machine according to the address mapping table; the address mapping table comprises a corresponding relation between a physical address of the virtual machine and a virtual address of the host machine;
the processing the target data through the kernel of the host according to the data request message and the shared queue includes:
and processing the target data through the kernel of the host machine according to the virtual address, the data request message and the shared queue.
3. The method of claim 2, wherein the shared data information is stored in a shared mapping space that is accessible to both the virtual machine and the host machine; before determining, according to the data request message, shared data information corresponding to the target data, the method further includes:
determining whether a shared queue corresponding to the target data exists in the shared mapping space;
the determining, according to the data request message, shared data information corresponding to the target data includes:
and under the condition that the shared queue corresponding to the target data exists in the shared mapping space, determining the shared data information corresponding to the target data according to the data request message.
4. The method of claim 3, wherein the shared mapping space comprises a plurality of mapping block regions, the method further comprising:
under the condition that the shared queue corresponding to the target data does not exist in the shared mapping space, determining a mapping block region corresponding to the target data from the shared mapping space;
initializing the mapping block region;
and creating a shared queue corresponding to the target data in the mapping block region.
5. The method of claim 4, further comprising:
and adding the target data and the shared queue corresponding to the target data to a preset queue association relationship, wherein the preset queue association relationship comprises correspondences between different data and their shared queues.
6. The method of claim 4, wherein the shared queue comprises a pending queue; the processing the target data through the kernel of the host according to the virtual address, the data request message and the shared queue includes:
determining a shared data request message corresponding to the data request message according to a preset data format;
sending the shared data request message to the pending queue;
extracting, by the host, the shared data request message from the pending queue;
and processing the target data through the kernel of the host machine according to the virtual address and the shared data request message.
7. The method of claim 6, wherein the shared data information further comprises a completion queue, the method further comprising:
after the target data is processed through the kernel of the host machine, a processing result is sent to the completion queue;
and extracting a processing result corresponding to the target data from the completion queue through the virtual machine.
8. The method of claim 6, wherein the mapping block region comprises a notification region; before the extracting, by the host, the shared data request message from the pending queue, the method further comprises:
sending a data processing request message to the host machine through the notification region;
the extracting, by the host, the shared data request message from the pending queue comprises:
and after the host receives the data processing request message, extracting the shared data request message from the pending queue.
9. A data processing apparatus, characterized in that the apparatus comprises:
the data request message receiving module is used for receiving a data request message triggered by a user through a virtual machine, wherein the data request message is used for requesting to process target data on a host machine running the virtual machine;
a shared data information determining module, configured to determine, according to the data request message, shared data information corresponding to the target data, where the shared data information includes a shared queue, and the shared queue is used for performing data interaction directly between the virtual machine and a kernel of the host;
and the data processing module is used for processing the target data through the kernel of the host machine according to the data request message and the shared queue.
10. A computer-readable medium, on which a computer program is stored, characterized in that the program, when executed by a processing apparatus, carries out the steps of the method of any one of claims 1 to 8.
11. An electronic device, comprising:
a storage device having at least one computer program stored thereon;
at least one processing device for executing the at least one computer program in the storage device to carry out the steps of the method according to any one of claims 1 to 8.
CN202210287992.5A 2022-03-22 2022-03-22 Data processing method and device, readable medium and electronic equipment Active CN114625481B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210287992.5A CN114625481B (en) 2022-03-22 2022-03-22 Data processing method and device, readable medium and electronic equipment
PCT/CN2023/082365 WO2023179508A1 (en) 2022-03-22 2023-03-18 Data processing method and apparatus, readable medium and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210287992.5A CN114625481B (en) 2022-03-22 2022-03-22 Data processing method and device, readable medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN114625481A true CN114625481A (en) 2022-06-14
CN114625481B CN114625481B (en) 2024-04-05

Family

ID=81904903

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210287992.5A Active CN114625481B (en) 2022-03-22 2022-03-22 Data processing method and device, readable medium and electronic equipment

Country Status (2)

Country Link
CN (1) CN114625481B (en)
WO (1) WO2023179508A1 (en)


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117762572A (en) * 2024-01-03 2024-03-26 北京火山引擎科技有限公司 Unloading method and equipment for host and virtual machine shared directory file system
CN118037531A (en) * 2024-03-04 2024-05-14 北京七维视觉传媒科技有限公司 Texture sharing method based on UE5 and Windows memory
CN118034958A (en) * 2024-04-07 2024-05-14 阿里云计算有限公司 Task state notification system and method for multi-process scene


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9612973B2 (en) * 2013-11-09 2017-04-04 Microsoft Technology Licensing, Llc Using shared virtual memory resources for performing memory-mapping
CN114625481B (en) * 2022-03-22 2024-04-05 北京有竹居网络技术有限公司 Data processing method and device, readable medium and electronic equipment

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101667144A (en) * 2009-09-29 2010-03-10 北京航空航天大学 Virtual machine communication method based on shared memory
CN111367472A (en) * 2020-02-28 2020-07-03 北京百度网讯科技有限公司 Virtualization method and device
CN111679921A (en) * 2020-06-09 2020-09-18 Oppo广东移动通信有限公司 Memory sharing method, memory sharing device and terminal equipment
CN113867993A (en) * 2021-12-03 2021-12-31 维塔科技(北京)有限公司 Virtualized RDMA method, system, storage medium and electronic device
CN114077480A (en) * 2022-01-19 2022-02-22 维塔科技(北京)有限公司 Method, device, equipment and medium for sharing memory between host and virtual machine

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023179508A1 (en) * 2022-03-22 2023-09-28 北京有竹居网络技术有限公司 Data processing method and apparatus, readable medium and electronic device
CN115220911A (en) * 2022-06-17 2022-10-21 中科驭数(北京)科技有限公司 Resource management method, device, equipment and medium
CN116010127A (en) * 2023-02-24 2023-04-25 荣耀终端有限公司 Message processing method, device and storage medium
CN116010127B (en) * 2023-02-24 2023-08-29 荣耀终端有限公司 Message processing method, device and storage medium
CN117009108A (en) * 2023-02-24 2023-11-07 荣耀终端有限公司 Message processing method, device and storage medium

Also Published As

Publication number Publication date
CN114625481B (en) 2024-04-05
WO2023179508A1 (en) 2023-09-28

Similar Documents

Publication Publication Date Title
CN114625481B (en) Data processing method and device, readable medium and electronic equipment
CN110389936B (en) Method, equipment and computer storage medium for starting small program
WO2014109502A1 (en) Touch event processing method and portable device implementing the same
CN108038112B (en) File processing method, mobile terminal and computer readable storage medium
CN111679921A (en) Memory sharing method, memory sharing device and terminal equipment
US11853767B2 (en) Inter-core data processing method, system on chip and electronic device
CN110704833A (en) Data permission configuration method, device, electronic device and storage medium
CN116257320B (en) DPU-based virtualization configuration management method, device, equipment and medium
CN116774933A (en) Virtualization processing method of storage device, bridging device, system and medium
WO2023174220A1 (en) Data processing method and apparatus, and readable medium and computing device
CN114637703B (en) Data access device, method, readable medium and electronic equipment
CN114691300A (en) Hot migration method of virtual machine instance
CN114625536A (en) Video memory allocation method, device, medium and electronic equipment
CN114201317A (en) Data transmission method, device, storage medium and electronic equipment
CN115470432A (en) Page rendering method and device, electronic equipment and computer readable medium
CN112835632A (en) Method and device for calling end capability and computer storage medium
CN112416303B (en) Software development kit hot repair method and device and electronic equipment
CN117608757A (en) Storage device control method and device, electronic device and storage medium
CN113391860B (en) Service request processing method and device, electronic equipment and computer storage medium
CN111026504B (en) Processing method and device for configuring instruction for acquiring processor information in virtual machine, CPU chip, system on chip and computer
CN116974732A (en) Memory processing method, device, terminal equipment and medium
CN108235822B (en) Virtual SIM card implementation method and device, storage medium and electronic equipment
CN117891624B (en) Inter-application communication method and device based on virtualization equipment and electronic equipment
CN110633141A (en) Memory management method and device of application program, terminal equipment and medium
CN113448550B (en) Method and device for realizing collection management of classes, electronic equipment and computer medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant