CN117369993A - Method for compatibly running different service systems in Linux environment and credit creation server - Google Patents


Info

Publication number
CN117369993A
Authority: CN (China)
Prior art keywords: virtual, instruction, operating system, physical, linux
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202311180975.2A
Other languages: Chinese (zh)
Inventor: ***
Current assignee: Xiamen Weite Technology Co ltd (the listed assignee may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Xiamen Weite Technology Co ltd
Application filed by Xiamen Weite Technology Co ltd
Priority to CN202311180975.2A
Publication of CN117369993A


Classifications

    • G06F9/5027: Allocation of resources, e.g. of the central processing unit [CPU], to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F9/45558: Hypervisor-specific management and integration aspects
    • G06F2009/45562: Creating, deleting, cloning virtual machine instances
    • G06F2009/45583: Memory management, e.g. access or allocation
    • G06F2209/5011: Pool (indexing scheme relating to G06F9/50)
    • G06F2209/5018: Thread allocation (indexing scheme relating to G06F9/50)
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The invention belongs to the technical field of computers, and discloses a method, and a credit creation server, for compatibly running different service systems in a Linux environment. The method comprises the following steps: creating a kernel thread for each virtual CPU by using the process management mechanism of the Linux kernel, allocating virtual registers and virtual memory, and mapping the virtual registers and virtual memory to physical registers and physical memory by using hardware virtualization technology, so as to create the virtual CPU; acquiring a target instruction sent by a service system, the service system being deployed in OS virtual operation containers of different architectures; and translating the target instruction according to the instruction translation set of the actual CPU to generate local machine instructions, which the virtual CPU then executes. In summary, the invention can provide a safe and effective encryption and decryption mechanism for the complete compatibility of business application systems.

Description

Method for compatibly running different service systems in Linux environment and credit creation server
Technical Field
The invention belongs to the technical field of computers, and particularly relates to a method, and a credit creation server, for compatibly running different service systems in a Linux environment ("credit creation" here renders the Chinese term Xinchuang, China's trusted-innovation programme of domestically developed hardware and software).
Background Art
In the advancement of the credit creation (Xinchuang) engineering effort, business applications built on existing technology architectures are difficult to fully adapt and make compatible with the trusted-innovation environment in a short period while equipment is being replaced and business continuity must be ensured. The problem of fusing the credit creation system with the traditional X86 system therefore needs to be solved.
Application virtualization technology is generally adopted to solve the compatibility problem, that is, something is converted from one form into another. The most common virtualization technology is the virtualization of memory in an operating system: the memory space a user needs at run time may far exceed the memory of the physical host, and with memory virtualization a portion of the hard disk can be virtualized into memory, transparently to the user. As another example, virtual private network (VPN) technology can virtualize a secure, stable "tunnel" through a public network, so that the user feels as if using a private network.
Currently, the internationally common virtualization technology includes virtual machine technology and container technology.
A virtual machine (Virtual Machine) is a complete computer system, emulated in software, that runs in a completely isolated environment with full hardware functionality. Work that can be done on a physical computer can be done in a virtual machine. When a virtual machine is created on a computer, part of the physical machine's hard disk and memory capacity must be used as the virtual machine's hard disk and memory. Each virtual machine has an independent CMOS, hard disk, and operating system, and can be operated just like a physical machine.
By generating a brand-new virtual image of the existing operating system, a virtual system with the same functions as the real Windows system is obtained. After entering the virtual system, all operations take place inside this new, independent virtual system: software can be installed independently, data can be saved, and the virtual system has its own desktop, all without any effect on the real system. The user can also switch flexibly between the existing system and the virtual image.
Taking a Linux virtual machine as an example: it is a virtual Linux operating environment installed on Windows. It is in fact just a file, a virtual Linux environment rather than an operating system in the real sense, but the practical effect is the same, so it works well when installed in a virtual machine. Currently, comparable technology on the market only virtualizes Linux operating environments on Windows; no practically usable Windows operating environment has been virtualized on Linux, mainly because many Windows technologies are not publicly disclosed, whereas Linux is open source.
Container technology effectively partitions the resources of a single operating system into isolated groups, so as to better balance conflicting resource-usage demands among those groups. Container technology can create twice as many virtual machine instances on the same server as before, which undoubtedly reduces overall system investment; careful planning is necessary, however, because twice the number of instances also places twice the I/O load on the servers running them. Currently only homogeneous container technologies, operating under a single CPU architecture, exist on the market: a container opened on Windows or Linux can only load operating system images and applications of the same class.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a method and a credit creation server for compatibly running different service systems in a Linux environment, so that service systems built for different operating systems and CPU architectures can run fully compatibly on trusted-innovation hardware.
The embodiment of the invention provides a method for compatibly running different service systems in a Linux environment, which comprises the following steps:
creating a kernel thread for each virtual CPU by using the process management mechanism of the Linux kernel, allocating virtual registers and virtual memory, and mapping the virtual registers and virtual memory to physical registers and physical memory by using hardware virtualization technology, so as to create the virtual CPU;
acquiring a target instruction sent by a service system, the service system being deployed in OS virtual operation containers of different architectures;
and translating the target instruction according to the instruction translation set of the actual CPU to generate local machine instructions, and executing the local machine instructions by the virtual CPU.
Preferably, when the virtual CPU executes an instruction, checking whether the instruction requires a privilege level transition;
if necessary, switching the virtual CPU to the VMX Root mode, executing privilege level conversion, and switching the virtual CPU back to the VMX Non-Root mode after the conversion is completed;
if not, the instruction is directly executed.
Preferably, when the instruction is a memory-access instruction, the virtual CPU converts the virtual address into a physical address, using the page table mapping mechanism to map the virtual address to the physical address;
when the instruction is an I/O request instruction, the virtual CPU converts the virtual I/O request into a physical I/O request, sends the physical I/O request to the physical device, and then converts the device's response into a virtual I/O response.
Preferably, the OS virtual run container runs containers of different architectures on the same physical host based on heterogeneous container technology; wherein, specifically:
firstly, converting binary files of OS virtual operation containers with different architectures into binary files with local architectures; binary conversion is divided into two phases: the method comprises the steps of static conversion and dynamic conversion, wherein the static conversion converts a binary file of a container into a binary file of a local architecture, registers the binary file of the container into a kernel, and when the binary file of the container is executed, the kernel automatically calls a provided binary converter to convert the binary file of the container into the binary file of the local architecture; dynamic conversion converts the system call of the container into the system call of the local architecture, and the dynamic conversion can simulate various different system calls, including file operation, network operation and process management;
then, using virtualization technology, the OS virtual operation containers of different architectures are virtualized into containers of the local architecture; virtualization is divided into two phases, virtualization and simulation: the virtualization phase virtualizes the container's hardware resources into hardware resources of the local architecture and maps them onto the local hardware resources; the simulation phase simulates the container's system calls as system calls of the local architecture.
Preferably, translation of the CPU instruction set includes instruction translation and code generation;
the instruction translation is divided into two stages of decoding and translation;
the decoding stage decodes the target instruction into an internal representation form, and decodes the target instruction into an LLVM IR form by using a decoder provided by the LLVM module; the translation stage utilizes a JIT compiler provided by the LLVM module to compile LLVM IR;
the code generation comprises two stages of basic block generation and global optimization;
the basic block generation stage utilizes a basic block generator provided by the LLVM module to convert the compiled LLVM IR into a basic block;
the global optimization phase will optimize the basic blocks into local machine instructions using a global optimizer provided by the LLVM module.
Preferably, the method further comprises:
caching the translated local machine instruction into a memory by using a cache manager provided by the LLVM module, so that when the same instruction is executed, the translated code is directly read from the cache, and repeated translation is avoided;
non-corresponding instructions are converted into sequences of corresponding instructions using macro-replacement techniques.
Preferably, the method further comprises:
the transmission of information and data between different operating systems is realized through heterogeneous operating system buses, so that the interoperability between heterogeneous systems is realized; wherein:
messaging is divided into two phases: transmitting and receiving;
in the sending phase, a message is sent from one operating system onto the heterogeneous operating system bus, which forwards it to the other operating system; the heterogeneous operating system bus is a virtual bus that transfers messages via shared memory, the network, or files;
the receiving stage receives the message from the heterogeneous operating system bus to another operating system;
data transmission is likewise divided into two phases: sending and receiving.
In the sending phase, data is sent from one operating system to the heterogeneous operating system bus, which carries the data using shared memory, the network, or files, that is, by writing data into shared memory, sending network packets, or writing data into files;
in the receiving phase, the other operating system receives the data from the heterogeneous operating system bus;
the receiving operating system then receives and processes the data through the interface provided by the protocol, reading data from shared memory, receiving network packets, or reading files.
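The send/receive phases described above can be sketched as follows; the in-memory queue stands in for the shared-memory, network, or file transport, and all class and method names are illustrative assumptions, not the patent's implementation:

```python
# Minimal sketch of a heterogeneous operating system bus: one OS posts a
# message onto the virtual bus (send phase) and another drains it
# (receive phase). A per-destination queue models the transport.
from collections import deque

class HeterogeneousBus:
    def __init__(self):
        self.queues = {}                     # destination OS -> pending messages

    def send(self, src, dst, message):
        """Send phase: the source OS posts a message onto the bus."""
        self.queues.setdefault(dst, deque()).append((src, message))

    def receive(self, dst):
        """Receive phase: the destination OS takes the next message off the bus."""
        src, message = self.queues[dst].popleft()
        return src, message

bus = HeterogeneousBus()
bus.send("windows-guest", "linux-host", b"hello")
src, msg = bus.receive("linux-host")
```

A real transport would replace the deque with a shared-memory ring, a socket, or a file, as the text describes, but the two-phase shape stays the same.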
Preferably, the method further comprises:
virtualizing physical computing resources into a plurality of logical computing resources using a virtualization technique;
wherein, the virtualization is divided into two stages of virtualization and isolation;
the virtualization stage virtualizes the physical computing resources into a plurality of logical computing resources;
the isolation phase isolates the logical computing resources from each other and prevents interference between different users or applications.
Preferably, the method further comprises:
and realizing data interaction with the USB equipment through a localized bus protocol of the USB equipment.
Preferably, the method further comprises:
mapping Windows APIs called by the Windows application program to corresponding APIs on the Linux system by using an API mapping technology;
the API mapping is divided into two stages of API analysis and API mapping;
the API analysis stage analyzes the Windows APIs called by the Windows application program, using both static analysis and dynamic analysis: static analysis uses a disassembler to analyze the application's binary code and find the Windows APIs it calls; dynamic analysis uses a debugger to analyze the application as it runs and find the Windows APIs it calls;
the API mapping stage maps Windows APIs to corresponding APIs on the Linux system;
converting binary codes of Windows application programs into executable codes on a Linux system by using a code conversion technology; transcoding is divided into two phases: code loading and code conversion;
the code loading utilizes two modes of ELF format and PE format to load binary codes of Windows application programs into a memory;
transcoding utilizes dynamic binary conversion techniques to convert binary code of Windows applications into executable code on Linux systems.
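A minimal sketch of the API-mapping stage described above, assuming a simple name-to-name lookup table in the spirit of translation layers such as Wine; the table entries and function names are illustrative assumptions, not the patent's actual mapping:

```python
# Hypothetical table from a Windows API name (found by the API analysis
# stage) to a Linux-side equivalent used by the API mapping stage.
API_MAP = {
    "CreateFileA": "open",
    "ReadFile": "read",
    "CloseHandle": "close",
}

def map_windows_api(name):
    """API mapping phase: resolve a parsed Windows API call to a Linux API."""
    try:
        return API_MAP[name]
    except KeyError:
        # A real layer would fall back to emulation; here we just report.
        raise NotImplementedError(f"no Linux mapping for {name!r}")

linux_call = map_windows_api("CreateFileA")
```

In practice each mapping also needs argument and error-code conversion; the table only captures the name-resolution step.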
The embodiment of the invention also provides a credit server which comprises a processor and a memory, wherein the memory stores a computer program which can be executed by the processor to realize the method for compatibly running different service systems in the Linux environment.
The embodiment of the invention organically combines the virtual machine and the container technology to generate the heterogeneous system container, breaks through the difficulty of fusion compatible application of the communication system and the traditional x86 system, and has the following advantages:
1. translation of the CPU instruction set and macro substitution of non-corresponding instructions, providing universality across various types of CPU instructions;
2. through heterogeneous operating system bus protocols, information flows can be interconnected and intercommunicated between each operating system and device;
3. the OS virtual operation container provides a safe and effective encryption and decryption mechanism for the complete compatibility of the service system.
Drawings
FIG. 1 is a flow chart of a method for compatibly running different service systems in a Linux environment according to a first embodiment of the present invention;
FIG. 2 is a schematic diagram of the architecture of a credit server;
FIG. 3 is a schematic diagram of data interaction with a USB device through a USB device localized bus protocol in accordance with an embodiment of the present invention.
Detailed Description
The present invention will be further described with reference to specific examples and drawings.
Referring to fig. 1, a first embodiment of the present invention provides a method for compatibly running different service systems in a Linux environment, which may be performed by a credit server.
In this embodiment, the credit creation server includes a credit creation CPU and a credit creation operating system. The CPU may be, for example, a Loongson, Huacheng, Phytium (Feiteng), Zhaoxin ("mega core"), or Hygon ("sea light") CPU; the trusted operating system may be the Kylin server operating system, the UOS (unified) server operating system, etc. The present invention is not particularly limited in this respect.
The credit creation CPU and the credit creation operating system can be combined arbitrarily according to the actual scenario and needs; all such schemes fall within the protection scope of the invention.
The general architecture of the credit creation server is shown in fig. 2. According to the operating environment requirements of a service system, the architecture can simulate the required operating environment and, where needed, an RDP/VNC protocol, so as to run service systems of different operating systems fully compatibly in a Linux environment. For example, it can realize:
1. server-side service systems developed on the various versions of Windows Server can be migrated wholesale and quickly to a credit creation server (physical server, cloud server, or server cluster) for deployment and operation;
2. server-side service systems developed on the various versions of Linux (including CentOS) can likewise be migrated wholesale and quickly to a credit creation server (physical server, cloud server, or server cluster) for deployment and operation.
The working principle of the credit creation server of the present embodiment is described in detail below.
S101, creating a kernel thread for each virtual CPU by using a process management mechanism of a Linux kernel, distributing a virtual register and a virtual memory, and mapping the virtual register and the virtual memory to a physical register and a physical memory by using a hardware virtualization technology, thereby creating the virtual CPU.
In the present embodiment, first, the virtualization of the CPU is required. The virtual CPU implementation principle mainly involves two aspects: creation and management of virtual CPUs and execution of virtual CPUs.
Creation of virtual CPU:
first, a kernel thread is created for each virtual CPU using the process management mechanism of the Linux kernel. Each kernel thread runs in VMX Non-Root mode, it is responsible for executing virtual CPU instructions, and then allocates virtual registers and virtual memory for each virtual CPU.
Then, virtual registers and virtual memories are mapped to physical registers and physical memories by using a hardware virtualization technology, thereby realizing the creation and management of virtual CPUs.
Wherein the virtual registers include general purpose registers, control registers, segment registers, and the like. The virtual memory includes code segments, data segments, stack segments, and the like.
The virtual memory implementation principle mainly relates to management of virtual address space and page table mapping mechanism.
Management of the virtual address space uses hardware virtualization technology to divide memory addressing into two modes: Guest Physical Address (GPA) mode and Host Physical Address (HPA) mode.
In GPA mode, the guest can access its own virtual address space.
In HPA mode, the physical address space can be accessed.
And an independent virtual address space is allocated by using a process management mechanism of the Linux kernel.
The virtual address space includes code segments, data segments, stack segments, and the like.
The page table mapping mechanism is mainly used to map virtual addresses to physical addresses.
The page table mapping mechanism includes two levels: a primary page table and a secondary page table.
The primary page table is a fixed-size page table used to map the high-order bits of the virtual address to a secondary page table. The secondary page table is a variable-size page table used to map the low-order bits of the virtual address to a physical address. When the virtual CPU executes a memory-access instruction, the virtual address is first decomposed into a primary page table index, a secondary page table index, and an in-page offset. The physical address of the secondary page table is obtained from the primary page table, and the physical address of the physical page is then obtained from the secondary page table. Finally, the base address of the physical page is added to the offset of the virtual address to obtain the physical address.
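The two-level walk above can be sketched as follows; the 10/10/12-bit field widths and 4 KiB page size are assumptions made for the sketch, not values taken from the patent:

```python
# Hypothetical two-level page-table walk: decompose a 32-bit virtual
# address into a primary index, a secondary index, and an in-page offset,
# then walk the tables to form the physical address.
PAGE_SHIFT = 12          # 4 KiB pages: low 12 bits are the offset (assumed)
PT_BITS = 10             # 10 index bits per page-table level (assumed)

def split_vaddr(vaddr):
    """Decompose a virtual address into (primary index, secondary index, offset)."""
    offset = vaddr & ((1 << PAGE_SHIFT) - 1)
    l2 = (vaddr >> PAGE_SHIFT) & ((1 << PT_BITS) - 1)
    l1 = (vaddr >> (PAGE_SHIFT + PT_BITS)) & ((1 << PT_BITS) - 1)
    return l1, l2, offset

def translate(vaddr, l1_table):
    """Walk the primary and secondary tables to produce a physical address."""
    l1, l2, offset = split_vaddr(vaddr)
    l2_table = l1_table[l1]          # primary table -> secondary table
    page_base = l2_table[l2]         # secondary table -> physical page base
    return page_base + offset        # add the in-page offset

# One secondary table whose entry 1 maps to the physical page at 0x40000.
l1_table = {0: {1: 0x40000}}
paddr = translate(0x1234, l1_table)
```

A hardware walker does the same arithmetic on page-table entries in memory; the dictionaries merely stand in for those tables.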
Execution for virtual CPU:
when a virtual CPU executes an instruction, it is first checked whether the instruction requires a privilege level transition. If necessary, the virtual CPU is switched to VMX Root mode, and privilege level transition is performed. After the conversion is completed, the virtual CPU is switched back to the VMX Non-Root mode to continue executing the instruction. If not, the instruction is directly executed.
When the virtual CPU executes the access memory instruction, the virtual address is converted into a physical address. Virtual addresses are mapped to physical addresses using a page table mapping mechanism.
When the virtual CPU performs an I/O operation, the virtual I/O request is converted to a physical I/O request. The physical I/O request is sent to the physical device and then the device's response is converted to a virtual I/O response.
Device simulation is required before performing I/O operations, and the device simulation function is used to simulate various I/O devices, such as a network card, a disk, a USB, and the like. The virtualized I/O request is converted into the I/O request of the physical host, thereby realizing the device virtualization.
The device simulation function includes two parts: device simulator and device driver. The device simulator simulates the hardware behavior of the device, such as receiving and transmitting data packets, reading and writing magnetic disks, and the like. The device driver is responsible for converting the I/O requests into operations of the device simulator.
For example, when the virtual machine sends a network packet, the packet is first copied from the virtual address space to the physical address space and then delivered to the device simulator of the virtual network card. The device simulator sends the packet to the physical network card and then returns any received packets.
When accessing the disk, the virtual disk request is converted to a physical disk request. The virtual disk request is converted to a physical disk request using a block device driver. The block device driver sends a physical disk request to the physical disk and then returns the received data.
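The device-simulator/device-driver split described above can be sketched with a toy network card; all class and method names here are assumptions for illustration, not the patent's design:

```python
# Hypothetical split: the driver converts a virtual I/O request into an
# operation on the simulator, and the simulator stands in for the
# physical NIC's hardware behaviour.
class NicSimulator:
    """Simulates the hardware behaviour of the network card."""
    def __init__(self):
        self.wire = []                      # stands in for the physical link

    def transmit(self, packet):
        self.wire.append(packet)            # "send to the physical network card"
        return len(packet)                  # bytes written, as a device reports

class NicDriver:
    """Converts virtual I/O requests into operations on the simulator."""
    def __init__(self, simulator):
        self.simulator = simulator

    def handle_send(self, guest_buffer):
        packet = bytes(guest_buffer)        # copy out of the guest address space
        return self.simulator.transmit(packet)

nic_sim = NicSimulator()
nic = NicDriver(nic_sim)
sent = nic.handle_send(bytearray(b"\x01\x02\x03"))
```

The same driver/simulator pairing applies to the disk case in the text, with the block device driver playing the driver role.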
S102, acquiring a target instruction sent by a service system; the service system is deployed in OS virtual operation containers with different architectures.
In this embodiment, the service system may be a Server-side service system developed based on various versions of Windows Server, or may be a Server-side service system developed based on various versions of Linux (including CentOS), which is not particularly limited in the present invention.
In this embodiment, the service system is deployed in an OS virtual operation container of different architecture, where the OS virtual operation container operates containers of different architecture on the same physical host based on heterogeneous container technology; specifically:
first, the binaries of the OS virtual execution containers of different architectures are converted into binaries of local architectures.
Wherein the binary conversion is divided into two phases: the method comprises the steps of static conversion and dynamic conversion, wherein the static conversion converts a binary file of a container into a binary file of a local architecture, registers the binary file of the container into a kernel, and when the binary file of the container is executed, the kernel automatically calls a provided binary converter to convert the binary file of the container into the binary file of the local architecture; dynamic conversion converts the system call of the container into the system call of the local architecture, and the dynamic conversion can simulate various different system calls, including file operation, network operation and process management;
then, using virtualization technology, the OS virtual operation containers of different architectures are virtualized into containers of the local architecture; virtualization is divided into two phases, virtualization and simulation: the virtualization phase virtualizes the container's hardware resources into hardware resources of the local architecture and maps them onto the local hardware resources; the simulation phase simulates the container's system calls as system calls of the local architecture.
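The "dynamic conversion" stage above can be sketched as a remapping of system-call numbers from the container's architecture to the local one; the numbers below are invented for illustration and are not real ABI values:

```python
# Hypothetical dynamic conversion: rewrite a foreign container's syscall
# number to the local architecture's number, passing arguments through.
FOREIGN_TO_LOCAL = {          # foreign syscall no. -> local syscall no. (assumed)
    1: 64,   # e.g. write
    2: 57,   # e.g. open
    3: 63,   # e.g. read
}

def translate_syscall(foreign_no, args):
    """Dynamic conversion: map the syscall number, keep the arguments."""
    local_no = FOREIGN_TO_LOCAL.get(foreign_no)
    if local_no is None:
        # A full converter would emulate the call instead of failing.
        raise OSError(f"unsupported foreign syscall {foreign_no}")
    return local_no, args

local_no, args = translate_syscall(3, ("fd0", 128))
```

Real converters must also translate argument layouts and error codes, not just numbers; this sketch shows only the dispatch step.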
S103, translating the target instruction according to the current instruction translation set of the actual CPU to generate a local machine instruction, and executing the local machine instruction by the virtual CPU.
The credit creation CPU of this embodiment may be a Loongson CPU, Huacheng CPU, Phytium (Feiteng) CPU, Zhaoxin (mega core) CPU, Hygon CPU, etc., so translation of the CPU instruction set is required before instruction execution; the translation includes instruction translation and code generation.
the instruction translation is divided into two stages of decoding and translation;
the decode stage decodes the target instruction into an internal representation, and decodes the target instruction into LLVM IR (Intermediate Representation) form using a decoder provided by the LLVM module; the translation stage compiles LLVM IR using a JIT compiler provided by the LLVM module. The JIT compiler may compile LLVM IR into a variety of different sets of machine instructions, such as x86, ARM, MIPS, and the like.
The code generation comprises two stages of basic block generation and global optimization;
the basic block generation phase converts LLVM IR into basic blocks. A basic block is a set of consecutive instructions, where only the first instruction may be a jump instruction. LLVM IR is converted into basic blocks using a basic block generator provided by the LLVM module.
The global optimization phase optimizes the basic blocks to local machine instructions. The basic blocks are optimized to local machine instructions using a global optimizer provided by the LLVM module. The global optimizer may perform various optimizations on the basic blocks, such as constant propagation, dead code elimination, loop unrolling, etc.
In particular, after the local machine instructions are generated, they can be cached in memory through a cache manager provided by the LLVM module, so that when the same instruction is executed again the translated code is read directly from the cache, avoiding repeated translation;
In particular, macro-replacement techniques may also be used to convert non-corresponding instructions into sequences of corresponding instructions.
Non-corresponding instructions are instructions that exist on the target platform but have no equivalent instruction on the native platform.
A macro is defined for each non-corresponding instruction using macro definition, a text-replacement technique that binds a name to a block of text.
For example, a macro named "do_non_corresponding_instruction" may be defined to emulate a non-corresponding instruction. The macro may contain a sequence of several corresponding instructions, thereby emulating the non-corresponding instruction.
Macro replacement then substitutes the sequence of corresponding instructions for the non-corresponding instruction.
For example, when the target platform executes a non-corresponding instruction, the "do_non_corresponding_instruction" macro is invoked; the macro is replaced with the sequence of corresponding instructions, thereby emulating the non-corresponding instruction.
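A minimal sketch of the macro table; the macro name and register mnemonics are hypothetical, chosen only to illustrate the expansion:

```python
# Sketch of macro replacement: a guest instruction with no native equivalent
# is expanded, via a macro table, into a sequence of native instructions;
# instructions that do have an equivalent pass through unchanged.

MACROS = {
    # hypothetical non-corresponding instruction -> native sequence
    "do_non_corresponding_instruction": ["push r1", "xor r1, r1", "pop r1"],
}

def expand(instruction_stream):
    out = []
    for ins in instruction_stream:
        out.extend(MACROS.get(ins, [ins]))   # expand macros, pass others through
    return out

print(expand(["mov r0, 1", "do_non_corresponding_instruction"]))
```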
Preferably, the method further comprises:
Messages and data are transmitted between different operating systems through a heterogeneous operating system bus, realizing interoperability among heterogeneous systems.
The heterogeneous operating system bus protocol is a protocol for communication between different operating systems; it carries messages and data between them, thereby enabling interoperability between heterogeneous systems.
Messaging is divided into two phases: transmitting and receiving;
the sending phase sends the message from one operating system to the heterogeneous operating system bus, which forwards it to the other operating system; the heterogeneous operating system bus is a virtual bus that implements message transfer through shared memory, the network, or files;
the receiving stage receives the message from the heterogeneous operating system bus to another operating system;
Data transfer is likewise divided into two phases: sending and receiving.
In the sending phase, data is sent from one operating system to the heterogeneous operating system bus, which transfers it by writing it to shared memory, sending network packets, or writing it to a file;
in the receiving phase, the other operating system receives the data from the bus;
the receiving operating system then processes the data through the interface provided by the protocol, by reading from shared memory, receiving network packets, or reading files.
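The shared-memory variant of the bus can be sketched with length-prefixed framing; the buffer size and framing are assumptions, and a real bus would add locking and ring-buffer wraparound:

```python
# Sketch of the send/receive phases over a shared-memory "bus": one side
# writes a length-prefixed message into a shared buffer, the other reads it.
import struct

class SharedMemoryBus:
    def __init__(self, size=4096):
        self.buf = bytearray(size)   # stand-in for a real shared segment

    def send(self, payload: bytes):
        # sending phase: frame the message as <length><bytes>
        struct.pack_into(f"<I{len(payload)}s", self.buf, 0, len(payload), payload)

    def receive(self) -> bytes:
        # receiving phase: read the length prefix, then the payload
        (length,) = struct.unpack_from("<I", self.buf, 0)
        return bytes(self.buf[4:4 + length])

bus = SharedMemoryBus()
bus.send(b'{"event":"ready"}')      # sending phase: OS A -> bus
print(bus.receive())                # receiving phase: bus -> OS B
```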
Preferably, the method further comprises:
the physical computing resources are virtualized into a plurality of logical computing resources using virtualization techniques.
The shared computing resource technique uses virtualization to divide physical computing resources into multiple logical computing resources.
The virtualization is divided into two stages of virtualization and isolation;
virtualization virtualizes physical computing resources into multiple logical computing resources. Physical computing resources (e.g., CPUs, memory, disks, etc.) are virtualized into multiple logical computing resources.
Isolation isolates logical computing resources from each other and prevents interference between different users or applications.
Logical computing resources may be isolated using container technology or virtual machine technology: container technology uses the namespace and control group (cgroup) facilities provided by the Linux kernel, while virtual machine technology uses the isolation functionality provided by the VMM (virtual machine monitor).
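The virtualization/isolation split can be sketched as a resource allocator that never grants the same physical CPU to two tenants; the capacities and tenant names are illustrative:

```python
# Sketch: physical CPU and memory are carved into logical slices
# (virtualization), and the allocator refuses to hand the same physical
# CPU to two tenants (isolation).

class PhysicalHost:
    def __init__(self, cpus: int, mem_mb: int):
        self.free_cpus = set(range(cpus))
        self.free_mem = mem_mb
        self.allocations = {}

    def allocate(self, tenant: str, cpus: int, mem_mb: int):
        if len(self.free_cpus) < cpus or self.free_mem < mem_mb:
            raise RuntimeError("insufficient physical resources")
        granted = {self.free_cpus.pop() for _ in range(cpus)}  # exclusive CPUs
        self.free_mem -= mem_mb
        self.allocations[tenant] = (granted, mem_mb)           # isolated slice
        return granted

host = PhysicalHost(cpus=8, mem_mb=16384)
a = host.allocate("container-a", 2, 4096)
b = host.allocate("container-b", 2, 4096)
print(a.isdisjoint(b))   # -> True: tenants never share a physical CPU
```

In a real system the isolation step would be enforced by cgroups/namespaces or by the VMM rather than by bookkeeping alone.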
Preferably, the method further comprises:
Data interaction with USB devices is realized through a localized USB device bus protocol.
In actual operation, communication with external USB devices is inevitably required. In the layered structure of the USB system of this embodiment, data transfer between the Xinchuang server and a USB device can be regarded as taking place directly between the Xinchuang server software and the individual endpoints of the USB device; the connection between them is called a "pipe".
The pipe is an abstraction of the communication flow between the Xinchuang server and the USB device: logically, data moves between a data buffer on the Xinchuang server and an endpoint of the USB device, while the actual transfer is performed by the USB bus interface layer. Pipes correspond one-to-one with the endpoints of the USB device.
Pipes are further divided into stream pipes and message pipes according to the type of USB data transfer. A stream pipe is unidirectional; the data it carries has no USB-defined structure, and it can be used for bulk, isochronous, and interrupt transfers. A message pipe is bidirectional; the data it carries has a USB-defined structure, and it is used only for control transfers. The default control pipe implemented by endpoint 0 of a USB device is a message pipe.
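The pipe taxonomy above can be modelled directly; this is an illustrative data model, not a real USB stack:

```python
# Sketch: stream pipes are unidirectional and carry unstructured data for
# bulk/isochronous/interrupt transfers; the message pipe on endpoint 0 is
# bidirectional and used only for control transfers.

class StreamPipe:
    DIRECTIONS = {"IN", "OUT"}
    def __init__(self, endpoint: int, direction: str, transfer: str):
        assert direction in self.DIRECTIONS                    # unidirectional
        assert transfer in {"bulk", "isochronous", "interrupt"}
        self.endpoint, self.direction, self.transfer = endpoint, direction, transfer

class MessagePipe:
    def __init__(self, endpoint: int = 0):
        self.endpoint = endpoint
        self.transfer = "control"        # control transfers only
        self.bidirectional = True        # one pipe, both directions

default_control = MessagePipe()          # endpoint 0 default control pipe
bulk_in = StreamPipe(1, "IN", "bulk")
print(default_control.bidirectional, bulk_in.direction)
```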
The USB universal port mapping resembles client/server (C/S) software and mainly consists of three parts: the USB universal mapping client, the USB universal mapping server, and a virtual bus. The USB universal mapping client is installed on the Xinchuang platform; it mainly handles detection of USB devices and driving of the USB devices in the Xinchuang operating system, and any inserted USB device can be initialized by this driver.
The working principle for realizing the USB universal port mapping mainly comprises 3 steps:
(1) The USB device drivers that the user needs on the Xinchuang platform are installed on the Xinchuang server.
(2) When the user inserts a USB device on the Xinchuang platform, the USB universal mapping driver client automatically reports the vendor ID and product ID of the device to the USB universal mapping server on the Xinchuang server; the virtual bus on the server side then enumerates the device's driver and implements plug-and-play automatic loading.
(3) Through the USB universal mapping program, a local USB device can be used on the Xinchuang server, while the USB device uses the same driver (the USB universal mapping client) in the Xinchuang operating system on the Xinchuang platform.
Specific implementation of port mapping
Implementation at USB universal mapping program client
The key to the universal USB driver is that, when the hub driver enumerates USB devices, it loads this driver directly from its self-initialized driver linked list.
The method comprises the following specific steps:
(1) All endpoints of the interface are probed in the driver probe function, and a buffer is established for each endpoint.
(2) Thread 1 is created to send a UDP request once per second to acquire the server IP.
(3) Thread 2 is created to receive the UDP response carrying the IP, blocking on the read; when the response is received, thread 1 is cancelled so that it does not keep sending and occupying network bandwidth. Thread 2 configures the connection with the received IP address and then returns to the blocking read.
(4) After the IP is determined, a plurality of sub-threads are started, and each sub-thread corresponds to one USB interface endpoint.
(5) The important USB control endpoint 0 (default control) is a bidirectional pipe, so two handles must be established: one IN and one OUT.
(6) Bulk, isochronous, and interrupt transfers all use unidirectional pipes, so only one handle needs to be established. For an IN endpoint, the thread blocks reading the socket; when data arrives, the protocol is parsed, and if it is a defined read request (number of bytes to read, etc.), the request is sent to the USB device, the endpoint's data is read, and the result is written back to the socket. For an OUT endpoint, the thread likewise blocks reading the socket; when data arrives, the protocol is parsed, and if it is a defined write request (number of bytes to write, etc.), the payload that follows is read from the socket and sent to the USB device.
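Steps (2) and (3) above — the periodic UDP discovery request and the blocking receive that cancels it — can be sketched with two threads; the port, message strings, and loopback responder are illustrative assumptions standing in for the real mapping server on the LAN:

```python
# Sketch: thread 1 sends a discovery request once per second until thread 2
# receives the reply carrying the server address, then thread 1 is stopped
# so it does not keep occupying network bandwidth.
import socket
import threading

DISCOVERY_PORT = 50555                       # illustrative port

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", DISCOVERY_PORT))   # stands in for the mapping server

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.bind(("127.0.0.1", 0))

found = threading.Event()
server_ip = []

def requester():                             # thread 1: periodic UDP request
    while not found.is_set():
        client.sendto(b"WHO_IS_SERVER", ("127.0.0.1", DISCOVERY_PORT))
        found.wait(1.0)                      # once per second until answered

def responder():                             # stand-in for the server side
    data, addr = server.recvfrom(64)
    server.sendto(b"I_AM_SERVER", addr)

def receiver():                              # thread 2: blocking read for reply
    data, addr = client.recvfrom(64)
    server_ip.append(addr[0])                # configure the discovered IP
    found.set()                              # cancel thread 1's request loop

threads = [threading.Thread(target=f) for f in (responder, receiver, requester)]
for t in threads:
    t.start()
for t in threads:
    t.join()
server.close()
client.close()
print(server_ip[0])                          # -> 127.0.0.1
```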
Implementation on USB universal mapping program server
The USB universal mapping server maps the USB port of the Xinchuang operating system while communicating with the virtual bus in the Xinchuang server, acting as an intermediate bridge and performing protocol conversion. The protocol conversion mainly packs and unpacks data, interpreting network data and virtual-bus driver packets, thereby enabling communication between the network and the virtual bus.
The USB pipes established during device configuration keep their characteristics on the network. A USB device has multiple endpoints and can use multiple pipes when communicating with the host; data transfer on each pipe is independent.
Preferably, the method further comprises:
mapping Windows APIs called by the Windows application program to corresponding APIs on the Linux system by using an API mapping technology;
the API mapping is divided into two stages of API analysis and API mapping;
The API analysis stage analyzes the Windows APIs called by the Windows application, using two approaches: static analysis and dynamic analysis. Static analysis uses a disassembler to analyze the application's binary code and find the Windows APIs it calls; dynamic analysis uses a debugger to analyze the application at run time and find the Windows APIs it calls;
the API mapping stage maps Windows APIs to corresponding APIs on the Linux system;
converting the binary code of the Windows application into executable code on the Linux system using code conversion technology; code conversion is divided into two phases: code loading and code conversion;
code loading loads the binary code of the Windows application into memory, supporting both the ELF and PE formats;
code conversion uses dynamic binary translation to convert the binary code of the Windows application into executable code on the Linux system.
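The API-mapping stage can be sketched as a lookup table from Windows API names to Linux-side implementations; the table entries are illustrative, not a real compatibility layer's mappings:

```python
# Sketch of API mapping: each Windows API name resolves to a Linux-side
# implementation, and unmapped names fail loudly instead of misbehaving.
import os

API_MAP = {
    # Windows API           -> Linux equivalent (illustrative)
    "GetCurrentProcessId":  os.getpid,
    "CloseHandle":          os.close,
}

def call_windows_api(name, *args):
    try:
        return API_MAP[name](*args)
    except KeyError:
        raise NotImplementedError(f"no Linux mapping for {name}") from None

pid = call_windows_api("GetCurrentProcessId")
print(pid > 0)   # -> True
```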
The second embodiment of the present invention also provides a Xinchuang server, which includes a processor and a memory; the memory stores a computer program executable by the processor to implement the above method for compatibly running different service systems in a Linux environment.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus and method embodiments described above are merely illustrative, for example, flow diagrams and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present invention may be integrated together to form a single part, or each module may exist alone, or two or more modules may be integrated to form a single part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, an electronic device, a network device, or the like) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes. It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A method for compatibly operating different service systems in a Linux environment, comprising:
creating a kernel thread for each virtual CPU by using a process management mechanism of a Linux kernel, distributing a virtual register and a virtual memory, and mapping the virtual register and the virtual memory to a physical register and a physical memory by using a hardware virtualization technology so as to create a virtual CPU;
acquiring a target instruction sent by a service system; the service system is deployed in OS virtual operation containers with different architectures;
and translating the target instruction according to the current instruction translation set of the actual CPU to generate a local machine instruction, and executing the local machine instruction by the virtual CPU.
2. The method for compatibly running different service systems in a Linux environment according to claim 1, wherein:
checking whether the instruction requires privilege level transition when the virtual CPU executes the instruction;
if necessary, switching the virtual CPU to the VMX Root mode, executing privilege level conversion, and switching the virtual CPU back to the VMX Non-Root mode after the conversion is completed;
if not, the instruction is directly executed.
3. The method for compatibly running different service systems in a Linux environment according to claim 1, wherein:
when the instruction is a memory access instruction, the virtual CPU converts the virtual address into a physical address, mapping the virtual address to the physical address using a page table mapping mechanism;
when the instruction is an I/O request instruction, the virtual CPU converts the virtual I/O request into a physical I/O request, the physical I/O request is sent to the physical device, and then the response of the device is converted into a virtual I/O response.
4. The method for compatibly running different service systems in a Linux environment according to claim 1, wherein the OS virtual running containers run containers of different architectures on the same physical host based on heterogeneous container technology; specifically:
firstly, converting the binary files of the OS virtual running containers of different architectures into binary files of the local architecture; binary conversion is divided into two phases: static conversion and dynamic conversion; the static conversion converts the container's binary file into a binary file of the local architecture and registers the container's binary format with the kernel, so that when the container's binary file is executed the kernel automatically invokes the provided binary converter to convert it into a binary file of the local architecture; the dynamic conversion converts the container's system calls into system calls of the local architecture and can emulate various system calls, including file operations, network operations, and process management;
then, virtualization technology is used to present the OS virtual running containers of different architectures as containers of the local architecture; virtualization is divided into two phases: virtualization and emulation; the virtualization phase virtualizes the container's hardware resources into hardware resources of the local architecture and maps them onto the physical hardware resources of the local architecture; the emulation phase translates the container's system calls into system calls of the local architecture.
5. The method of claim 1, wherein the translation of the CPU instruction set includes instruction translation and code generation;
the instruction translation is divided into two stages of decoding and translation;
the decoding stage decodes the target instruction into an internal representation form, and decodes the target instruction into an LLVM IR form by using a decoder provided by the LLVM module; the translation stage utilizes a JIT compiler provided by the LLVM module to compile LLVM IR;
the code generation comprises two stages of basic block generation and global optimization;
the basic block generation stage utilizes a basic block generator provided by the LLVM module to convert the compiled LLVM IR into a basic block;
the global optimization phase will optimize the basic blocks into local machine instructions using a global optimizer provided by the LLVM module.
6. The method of compatibly running different service systems in a Linux environment according to claim 5, further comprising:
caching the translated local machine instruction into a memory by using a cache manager provided by the LLVM module, so that when the same instruction is executed, the translated code is directly read from the cache, and repeated translation is avoided;
non-corresponding instructions are converted into sequences of corresponding instructions using macro-replacement techniques.
7. The method of compatibly running different service systems in a Linux environment according to claim 1, further comprising:
the transmission of information and data between different operating systems is realized through heterogeneous operating system buses, so that the interoperability between heterogeneous systems is realized; wherein:
messaging is divided into two phases: transmitting and receiving;
the sending phase sends the message from one operating system to the heterogeneous operating system bus, which forwards it to the other operating system; the heterogeneous operating system bus is a virtual bus that implements message transfer through shared memory, the network, or files;
the receiving stage receives the message from the heterogeneous operating system bus to another operating system;
data transfer is likewise divided into two phases: sending and receiving;
in the sending phase, data is sent from one operating system to the heterogeneous operating system bus, which transfers it by writing it to shared memory, sending network packets, or writing it to a file;
in the receiving phase, the other operating system receives the data from the bus;
the receiving operating system then processes the data through the interface provided by the protocol, by reading from shared memory, receiving network packets, or reading files.
8. The method of compatibly running different service systems in a Linux environment according to claim 1, further comprising:
virtualizing physical computing resources into a plurality of logical computing resources using a virtualization technique;
wherein, the virtualization is divided into two stages of virtualization and isolation;
the virtualization stage virtualizes the physical computing resources into a plurality of logical computing resources;
the isolation phase isolates the logical computing resources from each other and prevents interference between different users or applications.
9. The method of compatibly running different service systems in a Linux environment according to claim 1, further comprising:
mapping Windows APIs called by the Windows application program to corresponding APIs on the Linux system by using an API mapping technology;
the API mapping is divided into two stages of API analysis and API mapping;
the API analysis stage analyzes the Windows APIs called by the Windows application, using two approaches: static analysis and dynamic analysis; the static analysis uses a disassembler to analyze the application's binary code and find the Windows APIs it calls; the dynamic analysis uses a debugger to analyze the application at run time and find the Windows APIs it calls;
the API mapping stage maps Windows APIs to corresponding APIs on the Linux system;
converting the binary code of the Windows application into executable code on the Linux system using code conversion technology, the code conversion being divided into two phases: code loading and code conversion;
the code loading loads the binary code of the Windows application into memory, supporting both the ELF and PE formats;
the code conversion uses dynamic binary translation to convert the binary code of the Windows application into executable code on the Linux system.
10. A Xinchuang server, comprising a processor and a memory, the memory having stored therein a computer program executable by the processor to implement the method for compatibly running different service systems in a Linux environment according to any one of claims 1 to 9.
CN202311180975.2A 2023-09-13 2023-09-13 Method for compatibly running different service systems in Linux environment and credit creation server Pending CN117369993A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311180975.2A CN117369993A (en) 2023-09-13 2023-09-13 Method for compatibly running different service systems in Linux environment and credit creation server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311180975.2A CN117369993A (en) 2023-09-13 2023-09-13 Method for compatibly running different service systems in Linux environment and credit creation server

Publications (1)

Publication Number Publication Date
CN117369993A true CN117369993A (en) 2024-01-09

Family

ID=89393722

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311180975.2A Pending CN117369993A (en) 2023-09-13 2023-09-13 Method for compatibly running different service systems in Linux environment and credit creation server

Country Status (1)

Country Link
CN (1) CN117369993A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117687626A (en) * 2024-02-04 2024-03-12 双一力(宁波)电池有限公司 Host computer and main program matching system and method
CN117687626B (en) * 2024-02-04 2024-05-03 双一力(宁波)电池有限公司 Host computer and main program matching system and method
CN118035992A (en) * 2024-04-12 2024-05-14 浪潮云信息技术股份公司 Memory security scanning method based on credit operation system, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
US7496495B2 (en) Virtual operating system device communication relying on memory access violations
Bugnion et al. Hardware and software support for virtualization
JP5139975B2 (en) Function level just-in-time conversion engine with multiple path optimizations
US7558723B2 (en) Systems and methods for bimodal device virtualization of actual and idealized hardware-based devices
US7478373B2 (en) Kernel emulator for non-native program modules
US8274518B2 (en) Systems and methods for virtualizing graphics subsystems
JP5608243B2 (en) Method and apparatus for performing I / O processing in a virtual environment
CN117369993A (en) Method for compatibly running different service systems in Linux environment and credit creation server
KR102013002B1 (en) Para-virtualized high-performance computing and gdi acceleration
US20180074843A1 (en) System, method, and computer program product for linking devices for coordinated operation
CN102968331B (en) A kind of virtual machine management system and file access method thereof
CA2462563C (en) Data alignment between native and non-native shared data structures
US20070016895A1 (en) Selective omission of endian translation to enhance emulator performance
US7069412B2 (en) Method of using a plurality of virtual memory spaces for providing efficient binary compatibility between a plurality of source architectures and a single target architecture
KR20140005280A (en) Virtual disk storage techniques
CN103034524A (en) Paravirtualized virtual GPU
US8631423B1 (en) Translating input/output calls in a mixed virtualization environment
CN103793260A (en) Platform virtualization system
KR101716715B1 (en) Method and apparatus for handling network I/O apparatus virtualization
Hale et al. Electrical Engineering and Computer Science Department
CN105556473A (en) I/O task processing method, device and system
Dall et al. Optimizing the Design and Implementation of the Linux {ARM} Hypervisor
TWI603199B (en) Capability based device driver framework
CN109656675A (en) Bus apparatus, computer equipment and the method for realizing physical host cloud storage
Andrus et al. Binary compatible graphics support in Android for running iOS apps

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination