CN104104736A - Cloud server and use method thereof - Google Patents

Cloud server and use method thereof

Info

Publication number
CN104104736A
CN104104736A (application CN201410382779.8A)
Authority
CN
China
Prior art keywords
network
cloud server
processor
interconnection network
node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410382779.8A
Other languages
Chinese (zh)
Inventor
杨晓君
叶胜兰
刘兴奎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dawning Information Industry Beijing Co Ltd
Original Assignee
Dawning Information Industry Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dawning Information Industry Beijing Co Ltd filed Critical Dawning Information Industry Beijing Co Ltd
Priority to CN201410382779.8A priority Critical patent/CN104104736A/en
Publication of CN104104736A publication Critical patent/CN104104736A/en
Pending legal-status Critical Current

Landscapes

  • Computer And Data Communications (AREA)

Abstract

The invention provides a cloud server and a use method thereof. The cloud server is composed of a plurality of computing nodes, with the computing resources distributed across the computing nodes; the computing resources comprise an uplink network interface, an interconnect interface, a storage device, and a processor connected to memory. The computing nodes build an interconnection network through their interconnect interfaces, communicate over the interconnection network according to a network communication protocol, and transmit Ethernet messages, storage messages and management messages on the interconnection network. A high-performance interconnection network is built inside the cloud server to support resource sharing, and the resources inside the cloud server can communicate over this network to achieve resource sharing; moreover, the interconnection network is confined to the inside of the server and is invisible to server applications, so it has no impact on the compatibility of the operating system or application software.

Description

Cloud server and use method thereof
Technical field
The present invention relates to the technical field of cloud servers, and in particular to a cloud server and a use method thereof.
Background technology
At present, cloud servers in cloud computing systems share computing resources through partition virtualization technology, that is, by constructing virtual machines in software. The computing resources of a cloud server include processors, memory, network and storage resources, and so on. Fig. 1 shows the partition virtualization structure adopted by a traditional server; as shown in the figure, the processors, memory, network and storage of each virtual machine (VM, Virtual Machine) share the system-wide computing resources of the cloud server.
Sharing cloud server computing resources by building virtual machines in software with partition virtualization technology depends heavily on high-performance traditional servers. Moreover, the drawbacks of the traditional servers commonly deployed in current cloud computing — high energy consumption, high cost, large footprint and so on — become more pronounced in the cloud era.
As a result, microservers built around lightweight processors have emerged. Because a microserver uses lightweight processors as its computing cores, it can achieve low energy consumption, low cost, high cost-performance and high performance-per-watt, and its operation and maintenance costs can also be very low. Microservers are therefore bound to become a major development trend for servers in the cloud computing era.
Deficiencies of the prior art:
However, precisely because its processing cores are lightweight, a microserver cannot match a traditional high-performance server in the performance and configuration needed to support virtual machines. That is, building virtual machines on a microserver cannot effectively achieve sharing of computing resources.
Summary of the invention
In view of the above problems, the present invention proposes a cloud server and a use method thereof, which solve the technical problem in the prior art that computing resource sharing cannot be effectively achieved.
An embodiment of the present invention provides a cloud server composed of a number of computing nodes, with the computing resources distributed across the computing nodes, the computing resources comprising:
an uplink network interface, for connecting to the external network to provide external network access;
an interconnect interface, for connecting to other nodes and forming an interconnection network with them;
a storage device, for storing data;
a processor connected to memory, for performing data processing.
The computing nodes form the interconnection network through their interconnect interfaces, communicate over the interconnection network according to a network communication protocol, and transmit Ethernet messages, storage messages and management messages on the interconnection network.
An embodiment of the present invention also provides a use method of the above cloud server, comprising the following steps:
the cloud server receives a resource request from an application load;
data are transmitted over the interconnection network according to a high-performance network communication protocol; each computing node forms the interconnection network through its interconnect interface, communicates over it according to the network communication protocol, and transmits Ethernet messages, storage messages and management messages on the interconnection network; the Ethernet messages carry traffic between processors and between processors and the external network according to the Ethernet communication protocol, the storage messages serve the processors' storage demands on the storage devices, and the management messages carry system management information and system operating status.
Beneficial effects:
In the embodiments of the present invention, the computing nodes inside the cloud server build an interconnection network through their interconnect interfaces to support resource sharing: the resources inside the cloud server can communicate over this network, and each computing node communicates over the interconnection network according to the network communication protocol, transmitting Ethernet messages, storage messages and management messages, thereby effectively achieving sharing of computing resources. Moreover, because the interconnection network built by the embodiments of the present invention is confined to the inside of the server, it is invisible to server applications, that is, it has no impact on the compatibility of the operating system or application software.
Brief description of the drawings
Specific embodiments of the invention are described below with reference to the accompanying drawings, in which:
Fig. 1 is a schematic diagram of the partition virtualization structure adopted by a traditional server;
Fig. 2 is a schematic diagram of the interconnection network architecture of the cloud server according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of the distributed sharing structure of the cloud server according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of the centralized sharing structure of the cloud server according to an embodiment of the present invention;
Fig. 5 is a schematic flowchart of an implementation of the use method of the cloud server according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of processor and memory resource virtualization according to an embodiment of the present invention.
Embodiment
To make the technical solution and advantages of the present invention clearer, exemplary embodiments of the present invention are described in more detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not an exhaustive list of all embodiments.
The embodiments of the present invention provide a cloud server and a use method thereof, described below.
The cloud server provided by an embodiment of the present invention is composed of a number of computing nodes; as shown in Fig. 2, the computing resources are distributed across the computing nodes, and the computing resources may comprise:
an uplink network interface, for connecting to the external network to provide external network access;
an interconnect interface, for connecting to other nodes and forming an interconnection network with them;
a storage device, for storing data;
a processor connected to memory, for performing data processing.
The computing nodes form the interconnection network through their interconnect interfaces, communicate over it according to a network communication protocol, and transmit Ethernet messages, storage messages and management messages on the interconnection network.
In implementation, the computing resources being distributed across the computing nodes may mean that every computing node includes an uplink network interface, an interconnect interface, a storage device, and a processor connected to memory.
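As an illustration only (not part of the patent disclosure), the composition of a computing node and the forming of the interconnection network described above could be sketched as follows; the class names, field names and units are assumptions chosen for readability.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class ComputeNode:
    """One computing node of the cloud server, holding all four resource types."""
    node_id: int
    uplink_interface: str        # uplink network interface for external network access
    interconnect_interface: str  # interconnect interface used to join the interconnection network
    storage_gb: int              # capacity of the local storage device
    cpu_cores: int               # processor connected to local memory
    memory_gb: int
    neighbors: List[int] = field(default_factory=list)  # peers reachable over the interconnection network

def build_interconnection_network(nodes: Dict[int, ComputeNode],
                                  links: List[Tuple[int, int]]) -> None:
    """Connect nodes through their interconnect interfaces to form the internal network."""
    for a, b in links:
        nodes[a].neighbors.append(b)
        nodes[b].neighbors.append(a)
```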
The distributed sharing mode of the network and storage resources inside the cloud server in an embodiment of the present invention may be as shown in Fig. 3. In the embodiments of the present invention, the distributed sharing mode means that computing resources of the same class, such as network and storage, are distributed over the nodes of the cloud server's high-performance interconnection network; each node may include network resources, storage resources and so on, and these resources can be used by the local node or by other nodes.
In implementation, when a computing node receives a computing resource request, it preferentially uses the computing resources of the local node.
Distributed sharing means that the system's resources are distributed over the access points of the interconnection network; a resource is local (to its own processor) and at the same time serves as a remote resource for other processors. Distributed resources are thus divided into local and remote; to maximize performance, local resources should be used whenever possible, and only when local resources are insufficient should a request be made to use the remote resources of other processors.
In implementation, these resources can access the interconnection network directly or indirectly through multiple entry points.
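A minimal sketch of the local-first policy of the distributed sharing mode described above, using hypothetical attribute names (free_storage_gb, node_id): a node serves a request from its own resources when possible and only otherwise applies to a remote node over the interconnection network.

```python
from types import SimpleNamespace

def allocate_storage(local_node, remote_nodes, requested_gb):
    """Distributed sharing: prefer the local node's storage, fall back to remote nodes."""
    if local_node.free_storage_gb >= requested_gb:
        local_node.free_storage_gb -= requested_gb
        return local_node.node_id                 # satisfied locally, no network hop
    for peer in remote_nodes:                     # apply to use a remote (other processor's) resource
        if peer.free_storage_gb >= requested_gb:
            peer.free_storage_gb -= requested_gb
            return peer.node_id                   # satisfied remotely over the interconnection network
    raise RuntimeError("no node can satisfy the request")

# Example: the local node has only 10 GB free, so a 50 GB request is served by a remote peer.
local = SimpleNamespace(node_id=0, free_storage_gb=10)
peers = [SimpleNamespace(node_id=1, free_storage_gb=100)]
print(allocate_storage(local, peers, 50))  # -> 1
```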
In implementation, the computing resources being distributed across the computing nodes may instead mean that each computing node includes one of the following computing resources: an uplink network interface, an interconnect interface, a storage device, or a processor connected to memory.
Fig. 4 is a schematic diagram of the cloud server's network and storage resources accessing the high-performance interconnection network as independent nodes. The resources inside the cloud server can access this high-performance interconnection network as independent nodes grouped by category; the interconnection network may comprise network nodes, storage nodes and service nodes, with the network resources and/or storage resources shared by the service nodes, wherein:
a network node connects to the uplink network interface, for connecting the cloud server to the external network;
a storage node may include a storage device, for storing data;
a service node may comprise a processor connected to memory, for service processing.
For convenience, to distinguish it from resources accessing the high-performance interconnection network in a distributed manner, the embodiments of the present invention call the mode in which resources access the high-performance interconnection network as independent nodes the centralized sharing mode.
In centralized sharing, each resource category exists in the system in a concentrated form; any processor that wants to use these resources must reach the designated destination node via the interconnection network. However much of a resource is needed, the request always goes to the designated node, and there is no division into local and remote.
In implementation, the resources inside the cloud server can access the high-performance interconnection network directly through a single entry point per category.
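By contrast, the centralized sharing mode could be sketched as below: every request for a resource category is routed to that category's single designated node, with no local/remote distinction. The routing table and node names are illustrative assumptions, not taken from the patent.

```python
# Designated destination node per resource category (a single entry point per category).
CATEGORY_NODE = {
    "network": "network-node-0",  # connects the uplink network interface
    "storage": "storage-node-0",  # holds the storage devices
}

def route_request(category: str, payload: bytes):
    """Centralized sharing: every processor sends its request to the category's designated node."""
    if category not in CATEGORY_NODE:
        raise ValueError(f"unknown resource category: {category}")
    return CATEGORY_NODE[category], payload  # would travel over the interconnection network

print(route_request("storage", b"write block 42"))  # -> ('storage-node-0', b'write block 42')
```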
In implementation, the topology of the interconnection network can be a direct network or an indirect network.
In implementation, the interconnection network can be a Torus network, a Mesh network, a fully-connected network (also called an All-to-All network), and so on.
The interconnection network can be a Torus network, a Mesh network, an All-to-All network, and so on; a Torus network can further be a 2D Torus or a 3D Torus, and the present invention places no restriction on the specific network structure adopted. A full-mesh topology is what is usually called a "fully connected topology": in a fully interconnected network structure all nodes are connected to one another, and each node handles the service processing of every user in the network, so the interconnection simultaneously provides load balancing and redundancy.
A high-bandwidth, low-latency 3D Torus interconnect architecture offers good scalability and high network communication performance; it can effectively connect the nodes inside the cloud server without separate switching equipment and can interconnect tens of thousands of nodes. It meets the requirements of cloud computing for computing resource sharing in terms of elastic scalability, efficiency, cost and power consumption. It achieves tight coupling between the nodes inside the cloud server, and inter-node communication performance, system reliability and availability are effectively guaranteed; as the node count grows, its advantages become even more prominent.
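To illustrate the 3D Torus mentioned above, the sketch below computes a node's neighbors using the standard Torus definition (six neighbors per node, coordinates wrapping around in every dimension); it is a general topology illustration rather than a construction taken from the patent.

```python
def torus3d_neighbors(x, y, z, dim_x, dim_y, dim_z):
    """Return the six wrap-around neighbors of node (x, y, z) in a dim_x * dim_y * dim_z 3D Torus."""
    return [
        ((x + 1) % dim_x, y, z), ((x - 1) % dim_x, y, z),
        (x, (y + 1) % dim_y, z), (x, (y - 1) % dim_y, z),
        (x, y, (z + 1) % dim_z), (x, y, (z - 1) % dim_z),
    ]

# Example: in a 4 x 4 x 4 Torus (64 nodes), node (0, 0, 0) wraps around to (3, 0, 0) in the x dimension.
print(torus3d_neighbors(0, 0, 0, 4, 4, 4))
```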
In implementation, the interconnection network can be a high-performance network.
Fig. 5 shows a schematic flowchart of an implementation of the use method of the above cloud server provided by an embodiment of the present invention; as shown in the figure, it may comprise the following steps:
Step 501: the cloud server receives a resource request from an application load;
Step 502: data are transmitted over the interconnection network according to a high-performance network communication protocol; each computing node forms the interconnection network through its interconnect interface, communicates over it according to the network communication protocol, and transmits Ethernet messages, storage messages and management messages on the interconnection network; the Ethernet messages carry traffic between processors and between processors and the external network according to the Ethernet communication protocol, the storage messages serve the processors' storage demands on the storage devices, and the management messages carry system management information and system operating status.
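To illustrate step 502, the sketch below dispatches the three message types carried on the interconnection network; the type constants and returned strings are hypothetical placeholders for the actual handling.

```python
ETHERNET, STORAGE, MANAGEMENT = "ethernet", "storage", "management"

def handle_message(msg_type: str, payload: bytes) -> str:
    """Dispatch a message received over the interconnection network according to its type."""
    if msg_type == ETHERNET:
        # traffic between processors, or between a processor and the external network (Ethernet protocol)
        return "forward to target processor or uplink network interface"
    if msg_type == STORAGE:
        # serves a processor's storage demand on a storage device
        return "apply to storage device"
    if msg_type == MANAGEMENT:
        # carries system management information and operating status
        return "deliver to management agent"
    raise ValueError(f"unknown message type: {msg_type}")

print(handle_message(STORAGE, b"read block 7"))  # -> 'apply to storage device'
```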
The embodiments of the present invention propose a sharing platform for cloud server computing resources based on a high-performance interconnection network, which effectively supports the sharing of computing resources. The computing resources inside the cloud server, such as processors, memory, network and storage, can communicate over this network to achieve resource sharing.
In implementation, the method may further include: aggregating and virtualizing the processors and memory resources among the computing resources into a physical node, and processing the cloud server's computation requests by the physical node or the virtual nodes deployed on it.
In implementation, the processors and memory resources may be aggregated and virtualized into a physical node of corresponding capacity according to the resource request of the application load.
The physical node in the embodiments of the present invention, which may also be called a service node, can share the processors and memory resources in the cloud server. Aggregation virtualization here means building reverse virtualization of the processors and memory resources by software.
Fig. 6 is a schematic diagram of sharing the cloud server's processors and memory resources by constructing reverse virtualization in software, also referred to as the aggregation virtualization mode.
Because the processors inside the cloud server are all small nodes to begin with, the embodiments of the present invention can use virtualization technology to configure them into one large node, virtualizing multiple processor systems into a single system. The processors and memory of the large node share the processor and memory resources of the whole cloud server system. The scale of the large node can be configured according to the application load.
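The idea of aggregating small nodes into one large virtual node sized to the application load could be sketched as follows; the capacity figures and function name are illustrative assumptions.

```python
from types import SimpleNamespace

def size_large_node(nodes, required_cores, required_memory_gb):
    """Aggregation virtualization: pick enough small nodes so that their combined
    processors and memory meet the application load's resource request."""
    chosen, cores, memory = [], 0, 0
    for node in nodes:
        if cores >= required_cores and memory >= required_memory_gb:
            break
        chosen.append(node.node_id)
        cores += node.cpu_cores
        memory += node.memory_gb
    if cores < required_cores or memory < required_memory_gb:
        raise RuntimeError("the cloud server cannot satisfy the requested capacity")
    return chosen  # these small nodes are virtualized into a single large node

# Example: three 8-core / 32 GB nodes; a load needing 16 cores and 48 GB gets nodes 0 and 1.
small = [SimpleNamespace(node_id=i, cpu_cores=8, memory_gb=32) for i in range(3)]
print(size_large_node(small, required_cores=16, required_memory_gb=48))  # -> [0, 1]
```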
The embodiments of the present invention build a high-performance interconnection network inside the cloud server to support resource sharing: the resources inside the cloud server can communicate over this network to achieve resource sharing. Because the constructed high-performance interconnection network is confined to the inside of the server, it is invisible to server applications, that is, it has no impact on the compatibility of the operating system or application software. Sharing of computing resources such as processors, memory, network and storage is achieved on the basis of the cloud server's internal high-performance interconnection network and aggregation virtualization; this sharing model is applicable not only to microservers but equally to traditional high-performance servers, and provides a performance guarantee for the shared result.
The above embodiments are intended only to illustrate the technical solution of the present invention, not to limit it. Those skilled in the art may make various changes, substitutions and modifications without departing from the spirit and essence of the present invention. Obviously, such changes, substitutions and modifications shall all fall within the protection scope of the claims of the present invention.

Claims (10)

1. A cloud server, characterized in that it is composed of a number of computing nodes, the computing resources being distributed across the computing nodes, and the computing resources comprising:
an uplink network interface, for connecting to the external network to provide external network access;
an interconnect interface, for connecting to other nodes and forming an interconnection network with them;
a storage device, for storing data;
a processor connected to memory, for performing data processing;
wherein the computing nodes form the interconnection network through their interconnect interfaces, communicate over the interconnection network according to a network communication protocol, and transmit Ethernet messages, storage messages and management messages on the interconnection network.
2. The cloud server according to claim 1, characterized in that the computing resources being distributed across the computing nodes means that every computing node includes an uplink network interface, an interconnect interface, a storage device, and a processor connected to memory.
3. The cloud server according to claim 2, characterized in that when a computing node receives a computing resource request, it preferentially uses the computing resources of the local node.
4. The cloud server according to claim 1, characterized in that the computing resources being distributed across the computing nodes means that each computing node includes one of the following computing resources: an uplink network interface, an interconnect interface, a storage device, or a processor connected to memory.
5. The cloud server according to claim 1, characterized in that the topology of the interconnection network is a direct network or an indirect network.
6. The cloud server according to claim 1, characterized in that the interconnection network is a Torus network, a Mesh network, or an All-to-All fully-connected network.
7. The cloud server according to claim 1, characterized in that the interconnection network is a high-performance network.
8. A use method of the cloud server according to any one of claims 1 to 7, characterized by comprising the following steps:
the cloud server receives a resource request from an application load;
data are transmitted over the interconnection network according to a high-performance network communication protocol; each computing node forms the interconnection network through its interconnect interface, communicates over it according to the network communication protocol, and transmits Ethernet messages, storage messages and management messages on the interconnection network; the Ethernet messages carry traffic between processors and between processors and the external network according to the Ethernet communication protocol, the storage messages serve the processors' storage demands on the storage devices, and the management messages carry system management information and system operating status.
9. The use method according to claim 8, characterized by further comprising: aggregating and virtualizing the processors and memory resources among the computing resources into a physical node, and processing the cloud server's computation requests by the physical node or the virtual machines deployed on it.
10. The use method according to claim 9, characterized in that the processors and memory resources are aggregated and virtualized into a physical node of corresponding capacity according to the resource request of the application load.
CN201410382779.8A 2014-08-06 2014-08-06 Cloud server and use method thereof Pending CN104104736A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410382779.8A CN104104736A (en) 2014-08-06 2014-08-06 Cloud server and use method thereof

Publications (1)

Publication Number Publication Date
CN104104736A true CN104104736A (en) 2014-10-15

Family

ID=51672535

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410382779.8A Pending CN104104736A (en) 2014-08-06 2014-08-06 Cloud server and use method thereof

Country Status (1)

Country Link
CN (1) CN104104736A (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102271145A (en) * 2010-06-04 2011-12-07 国云科技股份有限公司 Virtual computer cluster and enforcement method thereof
CN103092807A (en) * 2012-12-24 2013-05-08 杭州华为数字技术有限公司 Node controller, parallel computing server system and route method

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105450766A (en) * 2015-12-07 2016-03-30 浪潮集团有限公司 FPGA fast time sequence convergence system
CN105530206A (en) * 2015-12-22 2016-04-27 合肥工业大学 Torus network based dual-access structures and working mode thereof
CN105530206B (en) * 2015-12-22 2019-01-29 合肥工业大学 A kind of Torus network system and its working method with double access infrastructures
US11307943B2 (en) 2017-03-21 2022-04-19 Huawei Technologies Co., Ltd. Disaster recovery deployment method, apparatus, and system
CN109582637A (en) * 2017-09-28 2019-04-05 韩国电子通信研究院 Network infrastructure system and using its data processing and data sharing method
CN109561034A (en) * 2018-12-25 2019-04-02 中科曙光信息产业成都有限公司 Three-dimensional network topological structure and its routing algorithm

Similar Documents

Publication Publication Date Title
CN102457439B (en) Virtual switching system and method of cloud computing system
CN105426245A (en) Dynamically composed compute nodes comprising disaggregated components
CN102017544B (en) Method and system for offloading network processing
CN110737508A (en) cloud container service network system based on wave cloud and implementation method
CN103353861B (en) Realize method and the device of distributed I/O resource pool
CN108293022A (en) A kind of methods, devices and systems of message transmissions
CN104104736A (en) Cloud server and use method thereof
Al-Azez et al. Virtualization framework for energy efficient IoT networks
CN113810205B (en) Service computing power information reporting and receiving method, server and data center gateway
Tseng et al. Service-oriented virtual machine placement optimization for green data center
CN110830574B (en) Method for realizing intranet load balance based on docker container
JP2016531372A (en) Memory module access method and apparatus
CN103973578A (en) Virtual machine traffic redirection method and device
US20230136615A1 (en) Virtual pools and resources using distributed networked processing units
Teyeb et al. Optimal virtual machine placement in large-scale cloud systems
Liu et al. PSNet: Reconfigurable network topology design for accelerating parameter server architecture based distributed machine learning
CN113765801B (en) Message processing method and device applied to data center, electronic equipment and medium
CN115514651B (en) Cloud edge data transmission path planning method and system based on software-defined stacked network
CN104125292A (en) Data processing device, cloud server and use method thereof
Alenazi et al. Energy-efficient distributed machine learning in cloud fog networks
CN116074160A (en) Virtual networking public network forwarding method for GPU rendering computing node cluster
CN115242597A (en) Information processing method, device and storage medium
CN104519150A (en) Network address translation port distribution method and system
CN107341057A (en) A kind of data processing method and device
Alharbi et al. Optimizing jobs' completion time in cloud systems during virtual machine placement

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20141015

RJ01 Rejection of invention patent application after publication