WO2013040942A1 - Data center system and apparatus, and method for providing services - Google Patents

Data center system and apparatus, and method for providing services

Info

Publication number
WO2013040942A1
Authority
WO
WIPO (PCT)
Prior art keywords
load balancing
network
type
balancing device
network request
Prior art date
Application number
PCT/CN2012/078773
Other languages
English (en)
French (fr)
Inventor
吴教仁
刘涛
刘宁
张�诚
傅江
Original Assignee
Baidu Online Network Technology (Beijing) Co., Ltd. (百度在线网络技术(北京)有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Baidu Online Network Technology (Beijing) Co., Ltd.
Priority to EP12833678.1A (granted as EP2765747B1)
Priority to US14/346,653 (granted as US8966050B2)
Publication of WO2013040942A1

Classifications

    • H04L67/1001: Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1023: Server selection for load balancing based on a hash applied to IP addresses or costs
    • H04L47/125: Avoiding congestion; recovering from congestion by balancing the load, e.g. traffic engineering
    • H04L61/251: Translation of Internet protocol [IP] addresses between different IP versions
    • H04L61/2528: Translation of Internet protocol [IP] addresses at a proxy
    • H04L61/4511: Network directories; name-to-address mapping using the domain name system [DNS]
    • H04L9/40: Network security protocols
    • H04L2101/686: Types of network addresses using dual-stack hosts, e.g. in Internet protocol version 4 [IPv4]/Internet protocol version 6 [IPv6] networks

Definitions

  • The present invention relates to the field of communications technologies, and in particular to a data center system and apparatus, and a method of providing a service.
  • IPv6: Internet Protocol Version 6, the sixth version of the Internet Protocol.
  • IPv4: Internet Protocol Version 4, the fourth version of the Internet Protocol.
  • IPv6 has advantages over IPv4: a larger address space, smaller routing tables, enhanced multicast support, and support for flows. It offers considerable room for development and provides a good network platform for quality-of-service control. Therefore, migrating existing IPv4 network data to an IPv6 network is an important issue in current network service research.
  • Conventional IPv4/IPv6 network data migration methods include the following three approaches:
  • (1) Dual-stack technology. The dual-stack approach requires enabling both the IPv4 and IPv6 protocol stacks on all network devices in the data center, so the deployment cost is too high, and some older devices do not support the IPv6 protocol.
  • Dual stack also places high performance demands on the network itself, and many devices were not evaluated for IPv6 when they were selected, so the risk is uncontrollable.
  • In addition, under dual stack the IPv4 network and the IPv6 network are independent of each other and cannot interwork at the data level.
  • (2) NAT64/DNS64. NAT: Network Address Translation; DNS: Domain Name System.
  • NAT64 is a stateful network address and protocol translation technology. In general it only allows users on the IPv6 side to initiate connections to resources on the IPv4 side, although NAT64 also supports static mappings between the IPv6 and IPv4 networks. NAT64 can translate addresses and protocols between IPv6 and IPv4 networks for TCP (Transmission Control Protocol), UDP (User Datagram Protocol), and ICMP (Internet Control Message Protocol). DNS64 works mainly in cooperation with NAT64: it synthesizes the A record (IPv4 address) in a DNS query result into an AAAA record (IPv6 address) and returns the synthesized AAAA record to the user on the IPv6 side. NAT64 with DNS64 requires no modification on the IPv6 client or the IPv4 server, solves most of the defects of NAT-PT, and does not need the DNS-ALG mechanism that NAT-PT relies on.
  • Figure 2 shows the networking of common application scenarios of NAT64 and DNS64.
  • The DNS64 server and the NAT64 router are completely independent of each other.
  • 64:FF9B::/96 is the well-known prefix used by DNS64; DNS64 generally uses this prefix to synthesize an IPv4 address into an IPv6 address (a minimal sketch of this synthesis follows below).
  • The same prefix is also used as the NAT64 translation prefix, and only traffic matching this prefix is translated by NAT64.
  • The prefix is written as pref64::/n and can be configured according to the actual network deployment.
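  • A minimal sketch of the prefix synthesis described above, using Python's standard ipaddress module, is shown below; the function names are illustrative, and a real DNS64/NAT64 deployment may use its own pref64::/n instead of the well-known prefix.

```python
import ipaddress

# Well-known NAT64/DNS64 prefix mentioned above; deployments may configure their own pref64::/n.
WELL_KNOWN_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")

def synthesize_aaaa(ipv4_literal, prefix=WELL_KNOWN_PREFIX):
    """Embed an IPv4 address in the low 32 bits of the NAT64 prefix (illustrative sketch)."""
    v4 = ipaddress.IPv4Address(ipv4_literal)
    return ipaddress.IPv6Address(int(prefix.network_address) | int(v4))

def extract_a(ipv6_literal):
    """Recover the embedded IPv4 address, as a NAT64 translator would when forwarding."""
    return ipaddress.IPv4Address(int(ipaddress.IPv6Address(ipv6_literal)) & 0xFFFFFFFF)

if __name__ == "__main__":
    synthesized = synthesize_aaaa("192.0.2.10")
    print(synthesized)                  # 64:ff9b::c000:20a
    print(extract_a(str(synthesized)))  # 192.0.2.10
```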
  • Traffic that does not match the prefix matches the IPv6 default route and is forwarded directly to the IPv6 router for processing.
  • For traffic toward the IPv4 network, the DNS64 server performs prefix synthesis, and traffic destined for the pref64::/n network segment is forwarded to the NAT64 router, which translates the IPv6 and IPv4 addresses and protocols so that resources in the IPv4 network can be accessed.
  • Figure 3 shows the interaction between DNS64 and NAT64.
  • the network address structure is as follows:
  • the NAT64/DNS64 technology has the following problems:
  • (3) IVI. FIG. 4 is a schematic diagram of mapping a subset of IPv6 addresses to IPv4 addresses using IVI. As shown in Figure 4, a subset of IPv6 addresses is mapped to IPv4 addresses so that the mapped address subset can communicate with the IPv6 network.
  • IVI technology has the following problems:
  • IDC: Internet Data Center.
  • A first object of the present invention is to provide a data center system that can transparently provide services between different types of networks without changing the existing IDC network structure or requiring large-scale system and application upgrades.
  • a second object of the present invention is to provide a method of providing a service in a data center.
  • A third object of the present invention is to provide a four-layer (Layer 4) load balancing device that can distribute traffic to back-end load balancing devices according to a scheduling policy.
  • A fourth object of the present invention is to provide a seven-layer (Layer 7) load balancing device that not only distributes traffic to back-end servers according to a scheduling policy but also converts requests between different types of networks.
  • A fifth object of the present invention is to provide an evolutionary deployment method based on the foregoing data center system, which deploys load balancing devices of the appropriate type to meet network performance requirements at different development stages of the network.
  • An embodiment of the first aspect of the present invention provides a data center system including at least one first load balancing device, multiple second load balancing devices, and multiple servers.
  • The first load balancing device is connected to the core network device; each of the second load balancing devices is connected to the first load balancing device; and each of the servers is connected to the second load balancing devices. The first load balancing device is configured to receive a first type of network request sent by a client through the core network device and to forward it, using a first scheduling policy, to one of the second load balancing devices.
  • The second load balancing devices are configured to receive the first type of network request forwarded by the first load balancing device, convert it into a second type of network request, perform source address and destination address translation on the second type of network request, and forward it, according to a second scheduling policy, to one of the servers.
  • In this way, services can be provided between different types of networks simply and transparently, without changing the existing IDC network structure or requiring large-scale system and application upgrades.
  • the embodiment of the present invention can improve the reliability of system operation through two layers of load balancing.
  • The embodiment of the second aspect of the present invention provides a method for providing a service in a data center, including the following steps:
  • a client sends a first type of network request to a first load balancing device through a core network device;
  • the first load balancing device forwards, according to a first scheduling policy, the first type of network request to one of a plurality of second load balancing devices;
  • the second load balancing device converts the first type of network request forwarded by the first load balancing device into a second type of network request, performs source address and destination address translation on the second type of network request, and forwards it, according to a second scheduling policy, to one of a plurality of servers; and the server receives the second type of network request forwarded by the second load balancing device and generates a second type of network response according to the second type of network request.
  • With this method, services can be provided between different types of networks simply and transparently, without changing the existing IDC network structure or requiring large-scale system and application upgrades. The end-to-end data path is sketched below.
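  • The following is a minimal, self-contained simulation of this two-tier flow. The class and attribute names (Request, L4Balancer, L7Balancer, Server) are hypothetical and purely illustrative; real devices operate on packets and sockets rather than Python objects.

```python
from dataclasses import dataclass

@dataclass
class Request:
    version: int          # 6 for the first type of network, 4 for the second type
    src: str
    dst: str
    payload: str

class Server:
    """IPv4-only back-end service server."""
    def __init__(self, addr): self.addr = addr
    def handle(self, req): return Request(4, self.addr, req.src, f"response to {req.payload}")

class L7Balancer:
    """Second (seven-layer) device: IPv6 <-> IPv4 conversion plus server scheduling."""
    def __init__(self, addr, servers): self.addr, self.servers = addr, servers
    def forward(self, req):
        server = self.servers[hash(req.payload) % len(self.servers)]   # URL-hash style pick
        ipv4_req = Request(4, self.addr, server.addr, req.payload)     # convert + rewrite addresses
        ipv4_resp = server.handle(ipv4_req)
        return Request(6, self.addr, req.src, ipv4_resp.payload)       # response converted back to IPv6

class L4Balancer:
    """First (four-layer) device: distributes IPv6 requests over the seven-layer devices."""
    def __init__(self, addr, balancers): self.addr, self.balancers, self.rr = addr, balancers, 0
    def forward(self, req):
        target = self.balancers[self.rr % len(self.balancers)]         # round-robin scheduling
        self.rr += 1
        return target.forward(Request(6, self.addr, "l7", req.payload))

servers = [Server("10.0.0.1"), Server("10.0.0.2")]
l7s = [L7Balancer("10.0.1.1", servers), L7Balancer("10.0.1.2", servers)]
l4 = L4Balancer("2001:db8::1", l7s)
print(l4.forward(Request(6, "2001:db8::100", "2001:db8::1", "GET /index")).payload)
```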
  • The embodiment of the third aspect of the present invention provides a four-layer load balancing device, including: a first transmission module, connected to a core network device and configured to receive a first type of network request sent by a client through the core network device; a first source conversion module, configured to perform source address and destination address translation on the first type of network request; and a first load balancing module, connected to one of a plurality of seven-layer load balancing devices and configured to forward, using a first scheduling policy, the translated first type of network request to one of the seven-layer load balancing devices connected to the four-layer load balancing device.
  • In this way, traffic can be distributed across the back-end load balancing devices according to the scheduling policy, which expands the bandwidth of the network devices and servers, increases throughput, strengthens network data processing capability, and improves network flexibility and availability.
  • An embodiment of the fourth aspect of the present invention provides a seven-layer load balancing device, including: a second transmission module, connected to one of a plurality of four-layer load balancing devices and configured to receive a first type of network request from the four-layer load balancing device; a network conversion module, configured to convert the first type of network request into a second type of network request; a second source conversion module, configured to perform source address and destination address translation on the second type of network request; and a second load balancing module, coupled to one of a plurality of servers and configured to forward, using a second scheduling policy, the translated second type of network request to one of the plurality of servers.
  • In this way, the first type of network request can be converted into the second type of network request, facilitating service provision between different types of networks, and traffic can be distributed across the back-end servers according to the scheduling policy, which extends the bandwidth of network devices and servers, increases throughput, enhances network data processing capability, and increases network flexibility and availability.
  • An embodiment of the fifth aspect of the present invention provides an evolutionary deployment method based on the data center system of the first aspect, including the following steps: detecting the distribution of first-type network traffic and second-type network traffic in the current network; and deploying the first load balancing device and the second load balancing device in the network according to that distribution. If the ratio of the first-type network traffic to the second-type network traffic is lower than a first threshold, the first load balancing device and the second load balancing device are deployed simultaneously, and the first load balancing device distributes and forwards the first-type network traffic to the back-end second load balancing devices; otherwise, only the first load balancing device is deployed in the network, and the first-type network traffic is distributed and forwarded to the back-end servers.
  • In this way, a load balancing device of the appropriate type is deployed at each development stage of the network, and a load balancing device that may become a network traffic bottleneck is removed in a timely manner, thereby meeting network performance requirements at different development stages with great flexibility.
  • FIG. 1 is a schematic diagram of a conventional dual-stack technology for providing services between different types of networks
  • FIG. 2 is a schematic diagram of a conventional application scenario of NAT64 and DNS64 networking
  • FIG. 3 is a schematic diagram of a conventional NAT64 and DNS64 communication process
  • FIG. 4 is a schematic diagram of a conventional mapping between an IPv6 address subset and IPv4 addresses using IVI;
  • FIG. 5 is a schematic diagram of a data center system according to an embodiment of the present invention.
  • FIG. 6 is a flow chart of a method for providing a service in a data center according to an embodiment of the present invention
  • FIG. 7 is a schematic diagram of a four-layer load balancing device according to an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of a seven-layer load balancing device according to an embodiment of the present invention.
  • FIG. 9 is a schematic diagram of an evolutionary deployment method based on a data center system according to an embodiment of the present invention.
  • A data center system 100 in accordance with an embodiment of the present invention is described below with reference to FIG. 5.
  • The data center system 100 provided by the embodiment of the present invention includes a plurality of first load balancing devices 110, a plurality of second load balancing devices 120, and a plurality of servers 130. The first load balancing devices 110 are connected to the core network device 200, each of the second load balancing devices 120 is connected to the first load balancing devices 110, and each of the servers 130 is connected to the second load balancing devices 120.
  • The plurality of first load balancing devices 110 are configured to receive a first type of network request sent by a client through the core network device 200, perform source address and destination address translation on the first type of network request, and forward the translated request, using the first scheduling policy, to one of the plurality of second load balancing devices 120.
  • The plurality of second load balancing devices 120 are configured to receive the translated first type of network request forwarded by the first load balancing device 110, convert it into a second type of network request, perform source address and destination address translation on the second type of network request, and forward the translated second type of network request, according to the second scheduling policy, to one of the plurality of servers 130.
  • Each server 130 receives a second type of network request from the second load balancing device 120, generates a second type of network response according to the second type of network request, and returns the second type of network response to the corresponding second load balancing device 120.
  • The second load balancing device 120 then converts the second type of network response into a first type of network response and returns it to the corresponding first load balancing device 110, and the first load balancing device 110 returns the first type of network response to the corresponding client.
  • In this way, services can be provided between different types of networks simply and transparently, without changing the existing IDC network structure or requiring large-scale system and application upgrades.
  • In this embodiment, a plurality of first load balancing devices 110 is taken as an example, which is a preferred implementation of the present invention.
  • Alternatively, there may be a single first load balancing device 110; in that case, the first load balancing device 110 need not perform source address and destination address translation on the first type of network request.
  • When there are multiple first load balancing devices, the first load balancing device 110 performs source address and destination address translation on the first type of network request so that, when feeding back a network response, the second load balancing device can return it to the corresponding first load balancing device.
  • Specifically, the first load balancing device 110 sets its own address as the source address of the first type of network request (recorded as the first source address), and sets the address of the second load balancing device 120 selected by the first scheduling policy as the destination address of the first type of network request (recorded as the first destination address). Therefore, when the second load balancing device 120 feeds a data packet back to the first load balancing device 110, the packet can be returned to the corresponding first load balancing device 110 according to the first source address set in this step.
  • the second load balancing device 120 may return the converted first type network response to the corresponding first load balancing device 110 according to the first source address.
  • In addition, the first load balancing device 110 saves the original source address when performing source address and destination address translation, so that the first type of network response fed back by the second load balancing device 120 can be returned to the corresponding client.
  • The first load balancing device 110 is further configured to perform source port and destination port translation on the first type of network request. Similar to the address translation, the first load balancing device 110 sets its own port as the source port of the first type of network request (recorded as the first source port), and sets the port of the second load balancing device 120 selected by the first scheduling policy as the destination port of the first type of network request (recorded as the first destination port). Therefore, when the second load balancing device 120 feeds a data packet back to the first load balancing device 110, the packet can be returned to the corresponding first load balancing device 110 according to the source port set in this step.
  • the second load balancing device 120 can return the converted first type of network response to the corresponding first load balancing device 110 according to the first source port.
  • In addition, the first load balancing device 110 saves the original first source port when performing source port and destination port translation, so that the first type of network response fed back by the second load balancing device 120 can be returned to the corresponding client according to the saved original source port.
  • the first load balancing device 110 forwards the first type of network request converted by the source address and the destination address to the second load balancing device 120 corresponding to the first destination address by using the first scheduling policy.
  • For example, the first scheduling policy may include a round-robin (polling) mode, a five-tuple hash policy, or a source address hash policy. It should be understood that the first scheduling policy is not limited to these; the foregoing examples are given for illustration only and are not intended to limit the scope of the present invention.
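  • As a rough sketch (not the patented implementation), the two hash-based policies named above can be illustrated as follows; the back-end names and addresses are placeholders.

```python
import hashlib

def _bucket(key, n):
    """Map a key to one of n back-ends via a stable hash."""
    return int.from_bytes(hashlib.md5(key).digest()[:8], "big") % n

def pick_by_five_tuple(src_ip, src_port, dst_ip, dst_port, proto, backends):
    """Five-tuple hash: packets of the same flow always reach the same seven-layer device."""
    key = f"{src_ip}|{src_port}|{dst_ip}|{dst_port}|{proto}".encode()
    return backends[_bucket(key, len(backends))]

def pick_by_source(src_ip, backends):
    """Source-address hash: all flows of one client stick to the same seven-layer device."""
    return backends[_bucket(src_ip.encode(), len(backends))]

l7_devices = ["lb7-a", "lb7-b", "lb7-c"]
print(pick_by_five_tuple("2001:db8::100", 40001, "2001:db8::1", 80, "tcp", l7_devices))
print(pick_by_source("2001:db8::100", l7_devices))
```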
  • The second load balancing device 120 uses a SOCKET-based method to convert between the first type of network request and the second type of network request, and between the first type of network response and the second type of network response. It then performs source address and destination address translation on the second type of network request and forwards the translated request, according to the second scheduling policy, to one of the plurality of servers 130.
  • Specifically, the second load balancing device 120 sets its own address as the source address of the second type of network request (recorded as the second source address), and sets the address of the server 130 selected by the second scheduling policy as the destination address of the second type of network request (recorded as the second destination address). It can be understood that the second source address is the same as the first destination address. Therefore, when the server 130 feeds a data packet back, the packet can be returned to the corresponding second load balancing device 120 according to the second source address set in this step. For example, the server 130 may return a second type of network response to the corresponding second load balancing device 120 based on the second source address.
  • In addition, the second load balancing device 120 saves the original source address when performing source address and destination address translation, so that the second type of network response fed back by the server 130 can be returned to the corresponding first load balancing device 110.
  • The second load balancing device 120 is further configured to perform source port and destination port translation on the second type of network request. Specifically, it sets its own port as the source port of the second type of network request (recorded as the second source port), and sets the port of the server 130 selected by the second scheduling policy as the destination port of the second type of network request (recorded as the second destination port). Therefore, when the server 130 feeds a data packet back, the packet can be returned to the corresponding second load balancing device 120 according to the second source port set in this step. For example, the server 130 may return a second type of network response to the corresponding second load balancing device 120 based on the second source port.
  • Likewise, the second load balancing device 120 saves the original source port when performing source port and destination port translation, so that the second type of network response fed back by the server 130 can be returned to the corresponding first load balancing device 110.
  • the second load balancing device 120 forwards the second type network request converted by the source address and the destination address to the server 130 corresponding to the destination address by using the second scheduling policy.
  • For example, the second scheduling policy may include a round-robin (polling) mode, a URL (Uniform Resource Locator) scheduling policy, a URL hash scheduling policy, or a consistent hash scheduling policy. It should be understood that the second scheduling policy is not limited to these; the foregoing examples are given for illustration only and are not intended to limit the scope of the present invention.
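  • A minimal consistent-hash ring, one plausible realization of the consistent hash scheduling policy mentioned above, is sketched below; the server addresses and virtual-node count are illustrative assumptions.

```python
import bisect
import hashlib

def _h(key):
    return int.from_bytes(hashlib.md5(key.encode()).digest()[:8], "big")

class ConsistentHashRing:
    """Minimal consistent-hash ring: adding or removing a server moves only a fraction of URLs."""
    def __init__(self, servers, vnodes=64):
        self._ring = sorted((_h(f"{s}#{i}"), s) for s in servers for i in range(vnodes))
        self._keys = [k for k, _ in self._ring]

    def pick(self, url):
        idx = bisect.bisect(self._keys, _h(url)) % len(self._ring)
        return self._ring[idx][1]

ring = ConsistentHashRing(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
print(ring.pick("/search?q=ipv6"))   # a plain URL hash would simply be _h(url) % len(servers)
```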
  • the second load balancing device 120 converts the second type of network response returned by the server 130 into a first type of network response.
  • Specifically, the second load balancing device 120 may use a SOCKET-based method to convert between the first type of network request and the second type of network request, and between the first type of network response and the second type of network response. The second load balancing device 120 then returns the first type of network response to the corresponding first load balancing device 110 according to the saved source address information, and the first load balancing device 110 returns the first type of network response to the corresponding client.
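  • The patent does not spell out the SOCKET-based conversion beyond its name; one plausible minimal form is a user-space relay that terminates the IPv6 connection and opens a separate IPv4 connection toward the back end, as sketched below (listen address, port, and back-end address are placeholders; error handling and socket cleanup are omitted).

```python
import socket
import threading

def _pipe(src, dst):
    """Copy bytes one way until the peer closes."""
    while data := src.recv(4096):
        dst.sendall(data)
    dst.shutdown(socket.SHUT_WR)

def relay_ipv6_to_ipv4(listen_addr=("::", 8080), backend=("10.0.0.1", 80)):
    """Accept IPv6 clients and open a fresh IPv4 connection per client toward the back end."""
    with socket.socket(socket.AF_INET6, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(listen_addr)
        srv.listen()
        while True:                                        # runs until interrupted
            client, _ = srv.accept()
            upstream = socket.create_connection(backend)   # IPv4 leg; source becomes this device
            threading.Thread(target=_pipe, args=(client, upstream), daemon=True).start()
            threading.Thread(target=_pipe, args=(upstream, client), daemon=True).start()
```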
  • For example, the first type of network may be an IPv6 network and the second type of network may be an IPv4 network; correspondingly, the first type of network request is an IPv6 request, the second type of network request is an IPv4 request, the first type of network response is an IPv6 response, and the second type of network response is an IPv4 response.
  • the first load balancing device 110 may be a four-layer load balancing device, and the second load balancing device 120 may be a seven-layer load balancing device.
  • There may be multiple first load balancing devices 110 and multiple second load balancing devices 120.
  • the plurality of first load balancing devices 110 can work in an active/standby redundancy mode or a cluster mode.
  • the plurality of second load balancing devices 120 can also work in an active/standby redundancy mode or a cluster mode. Therefore, when a first load balancing device 110 or the second load balancing device 120 fails, the operation of the entire data center system is not affected, thereby improving the security of the entire system operation.
  • In this way, services can be provided between different types of networks simply and transparently, without changing the existing IDC network structure or requiring large-scale system and application upgrades.
  • the method for providing a service in a data center includes the following steps: S201: The client sends the first type of network request to the first load balancing device through the core network device.
  • S202: The first load balancing device performs source address and destination address translation on the first type of network request and forwards the translated request, according to the first scheduling policy, to one of the plurality of second load balancing devices.
  • Specifically, the first load balancing device sets its own address as the source address of the first type of network request (recorded as the first source address), and sets the address of the second load balancing device selected by the first scheduling policy as the destination address of the first type of network request (recorded as the first destination address). Therefore, when the second load balancing device has a data packet to feed back to the first load balancing device, the packet can be returned to the corresponding first load balancing device according to the first source address set in this step.
  • the first load balancing device saves the original first source address when performing source address and destination address translation, so that the data packet can be fed back to the corresponding client according to the first source address.
  • Step S202 may further include source port and destination port translation by the first load balancing device. Similar to the address translation, the first load balancing device sets its own port as the source port of the first type of network request (recorded as the first source port), and sets the port of the second load balancing device selected by the first scheduling policy as the destination port of the first type of network request (recorded as the first destination port). Therefore, when the second load balancing device has a data packet to feed back to the first load balancing device, the packet can be returned to the corresponding first load balancing device according to the source port set in this step. In addition, the first load balancing device saves the original first source port when performing source port and destination port translation, so that the data packet can be returned to the corresponding client according to that original port.
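  • The address and port rewriting described above amounts to keeping a small translation table. The sketch below is a hypothetical illustration of that bookkeeping; the names, addresses, and the toy port allocator are made up for the example.

```python
import itertools

conn_track = {}                      # (device_src_ip, device_src_port) -> (orig_src_ip, orig_src_port)
_ports = itertools.count(20000)      # toy ephemeral-port allocator

def rewrite_request(orig_src, device_addr, chosen_backend):
    """Substitute the device's own address/port as the source and remember the original one."""
    device_src = (device_addr, next(_ports))
    conn_track[device_src] = orig_src
    return device_src, chosen_backend            # new (source, destination) of the forwarded request

def rewrite_response(device_src):
    """Restore the saved original source as the destination of the response."""
    return conn_track.pop(device_src)

src, dst = rewrite_request(("2001:db8::100", 40001), "2001:db8::1", ("2001:db8::a", 80))
print(rewrite_response(src))                     # ('2001:db8::100', 40001)
```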
  • the first load balancing device forwards the first type network request converted by the source address and the destination address to the second load balancing device corresponding to the first destination address by using the first scheduling policy.
  • the first scheduling policy comprises a polling mode, a quintuple hashing policy or a source address hashing policy. It is to be understood that the manner of the first scheduling policy is not limited thereto. The foregoing description of the first scheduling policy is for illustrative purposes only, and is not intended to limit the scope of the present invention.
  • S203: The second load balancing device converts the first type of network request forwarded by the first load balancing device into a second type of network request, performs source address and destination address translation on the second type of network request, and forwards it, according to the second scheduling policy, to one of the multiple servers.
  • Specifically, the second load balancing device uses a SOCKET-based method to convert the first type of network request into the second type of network request. It then performs source address and destination address translation on the second type of network request and forwards the translated request, according to the second scheduling policy, to one of the plurality of servers.
  • The second load balancing device sets its own address as the source address of the second type of network request (recorded as the second source address), and sets the address of the server selected by the second scheduling policy as the destination address of the second type of network request (recorded as the second destination address). It can be understood that the second source address is the same as the first destination address. Therefore, when the server feeds a data packet back, the packet can be returned to the corresponding second load balancing device according to the second source address set in this step. In addition, the second load balancing device saves the original source address when performing source address and destination address translation, so that the data fed back by the server can be returned to the corresponding first load balancing device.
  • Step S203 may further include source port and destination port translation by the second load balancing device: it sets its own port as the source port of the second type of network request (recorded as the second source port; it can be understood that the second source port is the same as the first destination port), and sets the port of the server selected by the second scheduling policy as the destination port of the second type of network request (recorded as the second destination port). Therefore, when the server needs to feed a data packet back, the packet can be returned to the corresponding second load balancing device according to the second source port set in this step.
  • the second load balancing device saves the original second source port when the source port and the destination port are switched, so that the data fed back by the server can be fed back to the corresponding first load balancing device according to the second source port.
  • the second load balancing device forwards the second type network request converted by the source address and the destination address to the server corresponding to the destination address by using the second scheduling policy.
  • the second scheduling policy includes a polling mode, a URL scheduling policy, a URL hash scheduling policy, or a consistent hash scheduling policy. It is to be understood that the manner of the second scheduling policy is not limited thereto, and the foregoing examples of the second scheduling policy are only for the purpose of example, and are not intended to limit the scope of the present invention.
  • S204 The server receives the second type network request forwarded by the second load balancing device, and generates a second type network response according to the second type network request, and returns the second type network response to the corresponding second load balancing device.
  • The second load balancing device then converts the second type of network response into a first type of network response and returns it to the corresponding first load balancing device, and the first load balancing device returns the first type of network response to the corresponding client.
  • the second load balancing device converts the second type of network response to the first type of network response in a SOCKET manner.
  • Specifically, the second load balancing device returns the first type of network response to the corresponding first load balancing device according to the address information saved in step S203, and the first load balancing device returns the first type of network response to the corresponding client according to the saved original source address.
  • The first type of network request and the first type of network response belong to the same network type, and the second type of network request and the second type of network response belong to the same network type. For example, the first type of network may be an IPv6 network and the second type of network may be an IPv4 network; correspondingly, the first type of network request is an IPv6 request, the second type of network request is an IPv4 request, the first type of network response is an IPv6 response, and the second type of network response is an IPv4 response.
  • The first load balancing device may be a four-layer load balancing device, and the second load balancing device may be a seven-layer load balancing device. There may be multiple first load balancing devices and multiple second load balancing devices.
  • the plurality of first load balancing devices may work in an active/standby redundancy mode or a cluster mode.
  • Second load balancing devices can also work in active/standby redundancy mode or cluster mode. Therefore, when a first load balancing device or a second load balancing device fails, the operation of the entire data center system is not affected, thereby improving the security of the entire system operation.
  • With this method, services can be provided between different types of networks simply and transparently, without changing the existing IDC network structure or requiring large-scale system and application upgrades.
  • A four-layer load balancing device 300 according to an embodiment of the present invention will be described below with reference to FIG. 7.
  • The four-layer load balancing device 300 provided by the embodiment of the present invention includes a first transmission module 310, a first source conversion module 320, and a first load balancing module 330.
  • the first transmission module 310 is coupled to the core network device for receiving a first type of network request, such as an IPv6 request, sent by the client through the core network device.
  • the first source conversion module 320 is configured to perform source address and destination address translation on the first type of network request.
  • The first load balancing module 330 is connected to one of a plurality of second load balancing devices (i.e., the seven-layer load balancing devices) and is configured to forward, using the first scheduling policy, the first type of network request that has undergone source address and destination address translation to one of the plurality of second load balancing devices.
  • In this way, traffic can be distributed across the back-end load balancing devices according to the scheduling policy, which expands the bandwidth of the network devices and servers, increases throughput, strengthens network data processing capability, and improves network flexibility and availability.
  • the second load balancing device may be a seven-layer load balancing device.
  • The first source conversion module 320 performs source address and destination address translation on the first type of network request sent by the client through the core network device. Specifically, the first source conversion module 320 sets the address of the four-layer load balancing device 300 as the source address of the first type of network request (recorded as the first source address), and sets the address of the second load balancing device selected by the first scheduling policy as the destination address of the first type of network request (recorded as the first destination address). Therefore, when the second load balancing device has a data packet to feed back, the packet can be returned to the corresponding four-layer load balancing device 300 according to the first source address set in this step. In addition, the four-layer load balancing device 300 saves the original first source address when performing source address and destination address translation, so that the first transmission module 310 can return the data packet to the corresponding client according to that original address.
  • The first source conversion module 320 is further configured to perform source port and destination port translation on the first type of network request. Specifically, it sets the port of the four-layer load balancing device 300 as the source port of the first type of network request (recorded as the first source port), and sets the port of the second load balancing device selected by the first scheduling policy as the destination port of the first type of network request (recorded as the first destination port). Therefore, when the second load balancing device has a data packet to feed back to the four-layer load balancing device 300, the packet can be returned to the corresponding four-layer load balancing device 300 according to the source port set in this step.
  • the first type of network response returned by the second load balancing device through the four-layer load balancing device 300 is further returned to the corresponding client.
  • the four-layer load balancing device 300 saves the original first source port when the source port and the destination port are switched, so that the first transmission module 310 can feed the data packet to the corresponding client according to the first source port.
  • the four-layer load balancing device 300 performs traffic scheduling on the second load balancing device of the back end based on the TCP/IP protocol, and forwards the first type network request converted by the source address and the destination address to the first scheduling policy by using the first scheduling policy. a second load balancing device corresponding to the first destination address.
  • the first scheduling policy includes a polling mode, a five-tuple hashing policy, or a source address hashing policy. It is to be understood that the manner of the first scheduling policy is not limited thereto. The foregoing description of the first scheduling policy is for illustrative purposes only, and is not intended to limit the scope of the present invention.
  • the four-layer load balancing device 300 of the embodiment of the present invention further includes a first defense module and a first backend inspection module.
  • the first defense module is used to defend against attacks on the four-layer load balancing device 300.
  • the first defense module has a four-layer DDoS (Distributed Denial of Service) defense function, and mainly defends against service attacks on the transport layer, for example, flag attacks such as SYN/ACK.
  • the first defense module can provide defense against attacks such as SYN FLOOD attacks, ACK STORMs, and the like.
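  • As a rough illustration of transport-layer flood mitigation of the kind named above, a per-source SYN rate check might look like the following; the limit value is an assumption, and real devices typically rely on SYN cookies or hardware counters rather than an in-memory dictionary.

```python
import time
from collections import defaultdict

SYN_LIMIT_PER_SEC = 100                      # illustrative threshold, not a value from the patent
_counters = defaultdict(lambda: [0.0, 0])    # src_ip -> [window_start, syn_count]

def allow_syn(src_ip, now=None):
    """Small per-source SYN rate check in the spirit of a four-layer defense module."""
    now = time.time() if now is None else now
    window_start, count = _counters[src_ip]
    if now - window_start >= 1.0:            # start a new one-second window
        _counters[src_ip] = [now, 1]
        return True
    if count < SYN_LIMIT_PER_SEC:
        _counters[src_ip][1] += 1
        return True
    return False                             # drop or challenge the connection attempt
```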
  • The first backend checking module is configured to check the current service state and device state of the four-layer load balancing device 300 and to handle faults automatically when they occur. Specifically, while the four-layer load balancing device 300 is in operation, when a service or machine fails, the first backend checking module automatically detects and handles the fault, ensuring the normal operation of the four-layer load balancing device 300, so that the failure of a single service or machine does not affect the operation of the whole system.
  • In this way, traffic can be distributed across the back-end load balancing devices according to the scheduling policy, which expands the bandwidth of the network devices and servers, increases throughput, strengthens network data processing capability, and improves network flexibility and availability.
  • A seven-layer load balancing device 400 according to an embodiment of the present invention will be described below with reference to FIG. 8.
  • the seven-layer load balancing device 400 provided by the embodiment of the present invention includes a second transmission module 410, a network conversion module 420, a second source conversion module 430, and a second load balancing module 440.
  • The second transmission module 410 is connected to one of a plurality of first load balancing devices (i.e., the four-layer load balancing devices) and is configured to receive from the first load balancing device the first type of network request that has undergone source address and destination address translation, for example an IPv6 request.
  • the network conversion module 420 is configured to convert the first type of network request that has undergone source address and destination address translation into a second type of network request, for example, an IPv4 request.
  • the second source conversion module 430 is configured to perform source address and destination address translation on the second type of network request.
  • the second load balancing module 440 is coupled to one of the plurality of servers for forwarding the second type of network request that is converted by the source address and the destination address to one of the plurality of servers by using the second scheduling policy.
  • In this way, the first type of network request can be converted into the second type of network request, facilitating service provision between different types of networks, and traffic can be distributed across the back-end servers according to the scheduling policy, which extends the bandwidth of network devices and servers, increases throughput, enhances network data processing capability, and increases network flexibility and availability.
  • the first load balancing device may be a four-layer load balancing device.
  • the network conversion module 420 can convert the first type of network request that has been converted by the source address and the destination address into the second type of network request by using the SOCKET method.
  • The second source conversion module 430 performs source address and destination address translation on the second type of network request produced by the network conversion module 420. Specifically, the second source conversion module 430 sets the address of the seven-layer load balancing device 400 as the source address of the second type of network request (recorded as the second source address), and sets the address of the server selected by the second scheduling policy as the destination address of the second type of network request (recorded as the second destination address). Therefore, when the server has a data packet to feed back to the seven-layer load balancing device 400, the packet can be returned to the corresponding seven-layer load balancing device 400 according to the second source address set in this step. In addition, the seven-layer load balancing device 400 saves the original source address when performing source address and destination address translation, so that the second transmission module 410 can return the data packet to the corresponding first load balancing device.
  • The second source conversion module 430 is further configured to perform source port and destination port translation on the second type of network request. Specifically, the second source conversion module 430 sets the port of the seven-layer load balancing device 400 as the source port of the second type of network request (recorded as the second source port), and sets the port of the server selected by the second scheduling policy as the destination port of the second type of network request (recorded as the second destination port). Therefore, when the server needs to feed a data packet back to the seven-layer load balancing device 400, the packet can be returned to the corresponding seven-layer load balancing device 400 according to the source port set in this step. In addition, the seven-layer load balancing device 400 saves the original source port when performing source port and destination port translation, so that the second transmission module 410 can return the data packet to the corresponding first load balancing device.
  • The seven-layer load balancing device 400 schedules traffic across the multiple back-end servers based on characteristics such as the URL, and forwards the second type of network request that has undergone source address and destination address translation, using the second scheduling policy, to the server corresponding to the second destination address.
  • the second scheduling policy includes a polling mode, a URL Scheduling policy, URL hash scheduling policy, or consistent hash scheduling policy. It is to be understood that the manner of the second scheduling policy is not limited thereto, and the foregoing examples of the second scheduling policy are only for the purpose of example, and are not intended to limit the scope of the present invention.
  • The server responds to the second type of network request sent by the seven-layer load balancing device 400 and generates a corresponding second type of network response.
  • the server returns the second type of network response described above to the corresponding seven-layer load balancing device 400 based on the second source address.
  • The network conversion module 420 is further configured to convert the second type of network response returned by the server into a first type of network response, and the first type of network response is returned by the second transmission module 410 to the corresponding first load balancing device.
  • the seven-layer load balancing device 400 of the embodiment of the present invention further includes a second defense module and a second backend inspection module.
  • the second defense module is used to defend against attacks on the seven-layer load balancing device 400.
  • the second defense module has a seven-layer DDoS defense function to defend against service attacks on the application layer.
  • the second defense module can provide defense against attacks such as URL threshold ban, IP threshold ban, and the like.
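  • As a rough illustration of the URL threshold ban and IP threshold ban mentioned above, a counter-based check might look like the following; the threshold values are assumptions, and a real device would also age or reset the counters over time.

```python
from collections import Counter

URL_THRESHOLD = 1000          # illustrative limits, not values from the patent
IP_THRESHOLD = 500

url_hits, ip_hits, banned_ips = Counter(), Counter(), set()

def check_request(src_ip, url):
    """Application-layer defense in the spirit described above: ban sources or URLs over a threshold."""
    if src_ip in banned_ips:
        return False
    ip_hits[src_ip] += 1
    url_hits[url] += 1
    if ip_hits[src_ip] > IP_THRESHOLD:
        banned_ips.add(src_ip)
        return False
    return url_hits[url] <= URL_THRESHOLD
```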
  • The second backend checking module is configured to check the current service state and device state of the seven-layer load balancing device 400 and to handle faults automatically when they occur. Specifically, while the seven-layer load balancing device 400 is in operation, when a service or machine fails, the second backend checking module automatically detects and handles the fault, ensuring the normal operation of the seven-layer load balancing device 400, so that the failure of a single service or machine does not affect the operation of the whole system.
  • In this way, the first type of network request can be converted into the second type of network request, facilitating service provision between different types of networks, and traffic can be distributed across the back-end servers according to the scheduling policy, which extends the bandwidth of network devices and servers, increases throughput, enhances network data processing capability, and increases network flexibility and availability.
  • the data center system can be the data center system 100 provided by the above embodiment of the present invention.
  • the method for deploying a data center system includes the following steps:
  • S501: Detect the distribution of first-type network traffic and second-type network traffic in the current network.
  • S502: Deploy the first load balancing device and the second load balancing device in the network according to the distribution of first-type network traffic and second-type network traffic.
  • Specifically, if the ratio of first-type network traffic to second-type network traffic is lower than the first threshold, the first load balancing device and the second load balancing device are deployed simultaneously in the network, and the first load balancing device distributes and forwards the first-type network traffic to the back-end second load balancing devices.
  • Otherwise, only the first load balancing device is deployed in the network, and the first-type network traffic is distributed and forwarded to the back-end servers.
  • In this way, a load balancing device of the appropriate type is deployed at each development stage of the network, and a load balancing device that may become a network traffic bottleneck is removed in a timely manner, thereby meeting network performance requirements at different development stages with great flexibility.
  • the first type of network may be an IPv6 network
  • the second type of network is an IPv4 network.
  • the following takes the IPv6 network and the IPv4 network as an example to describe the evolution deployment method in detail.
  • the IPv6 network traffic and the IPv4 network traffic in the current network are detected, and then the first load balancing device and the second load balancing device in the network are deployed according to the distribution of the IPv6 network traffic and the IPv4 network traffic.
  • the first load balancing device is a four-layer load balancing device
  • the second load balancing device is a seven-layer load balancing device.
  • When the ratio of IPv6 network traffic to IPv4 network traffic is lower than the first threshold, that is, the current network is in the initial stage of IPv6 deployment and IPv6 traffic is small, both the four-layer and seven-layer load balancing devices are deployed.
  • the seven-layer load balancing device uses the dual-stack protocol, that is, the seven-layer load balancing device uses the IPv4 protocol stack and the IPv6 protocol stack.
  • the four-layer load balancing device forwards the IPv4 request directly to the back-end service server, and distributes the IPv6 request according to the traffic to the back-end seven-layer load balancing device.
  • a four-layer load balancing device uses a first scheduling policy to distribute and forward traffic to a back-end seven-layer load balancing device.
  • the first scheduling policy may include a polling mode, a quintuple hashing policy, or a source address hashing policy.
  • the seven-layer load balancing device converts the IPv6 request from the four-layer load balancing device into an IPv4 request, and sends the IPv4 request to the back-end service server, and the back-end service server responds to the IPv4 request.
  • When the ratio of IPv6 network traffic to IPv4 network traffic is between the first threshold and the second threshold, or is higher than the second threshold, only the four-layer load balancing device is deployed in the network, and the IPv6 traffic is distributed and forwarded to the back-end service servers. Specifically, when the ratio is between the first threshold and the second threshold, the current network is in the middle stage of IPv6 deployment and IPv6 traffic is close to IPv4 traffic; only the four-layer load balancing device is deployed, to prevent the seven-layer load balancing device from becoming a network traffic bottleneck.
  • The front-end service servers are then configured as dual-stack servers so that they can be attached directly as back-end servers, and the four-layer load balancing devices distribute and forward the traffic to them.
  • If the ratio of the IPv6 traffic to the IPv4 traffic is higher than the second threshold, only the four-layer load balancing devices are likewise deployed in the network, and they distribute and forward the IPv6 traffic to the back-end service servers. In this case the current network is in the late stage of IPv6 adoption, and only a small amount of IPv4 traffic remains on the Internet.
  • The servers at the front end of the service are configured as dual-stack servers, that is, they run both the IPv4 protocol stack and the IPv6 protocol stack, so that they can be attached directly as back-end servers of the four-layer load balancing device, which distributes and forwards the traffic to them; a minimal dual-stack listener sketch follows below.
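  • As a rough illustration of what "dual stack" means for such a server, the sketch below opens one IPv4 and one IPv6 listening socket on the same port; an equally valid alternative (not shown) is a single IPv6 socket with IPV6_V6ONLY disabled. The port and option choices are illustrative, not taken from the patent.

```python
import socket

def dual_stack_listeners(port=8080):
    """Open one IPv4 and one IPv6 listening socket on the same port."""
    listeners = []
    for family, addr in ((socket.AF_INET, "0.0.0.0"), (socket.AF_INET6, "::")):
        s = socket.socket(family, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        if family == socket.AF_INET6:
            # Keep the IPv6 socket from also claiming the IPv4 side of the port.
            s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 1)
        s.bind((addr, port))
        s.listen()
        listeners.append(s)
    return listeners

print([s.family.name for s in dual_stack_listeners()])  # ['AF_INET', 'AF_INET6']
```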
  • If the IPv4 traffic of the current network drops to zero, the IPv4/IPv6 transition period has ended and the current network is a pure IPv6 network. The servers then only need the IPv6 protocol stack, can still be attached directly as back-end servers of the four-layer load balancing device, and the four-layer load balancing device distributes and forwards the traffic to them. The overall stage-by-stage decision is sketched below.
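  • The whole evolutionary decision can be summarised in a few lines. The threshold values below are placeholders, since the patent only requires that a first and a second threshold exist, not any particular numbers.

```python
def plan_deployment(ipv6_share, first_threshold=0.2, second_threshold=0.8):
    """Map the measured IPv6 traffic share to the set of devices to deploy."""
    if ipv6_share < first_threshold:
        # Early stage: L4 devices spread IPv6 requests over dual-stack L7 devices.
        return ["four-layer load balancers", "seven-layer load balancers"]
    # Middle stage, late stage and the pure-IPv6 end state: L4 devices only,
    # with dual-stack (or, finally, IPv6-only) service servers attached directly.
    return ["four-layer load balancers"]

for share in (0.05, 0.5, 0.95, 1.0):
    print(share, plan_deployment(share))
```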
  • In this way, a load balancing device of the corresponding type is deployed at each development stage of the network, and a load balancing device that might become a traffic bottleneck is removed in a timely manner, so that the performance requirements of the different development stages are met with considerable flexibility.
  • each functional unit in each embodiment of the present invention may be integrated into one processing module, or each unit may exist physically separately, or two or more units may be integrated into one module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
  • the integrated module can also be stored in a computer readable storage medium if it is implemented as a software function module and sold or used as a standalone product.
  • the storage medium mentioned above may be a read only memory, a magnetic disk or an optical disk or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Computer And Data Communications (AREA)

Abstract

The present invention discloses a data center system, comprising: a plurality of first load balancing devices for receiving first-type network requests sent by clients through a core network device and performing source-address and destination-address translation on them; a plurality of second load balancing devices for receiving the first-type network requests, converting them into second-type network requests, and performing source-address and destination-address translation on the second-type network requests; and a plurality of servers for receiving the second-type network requests from the second load balancing devices, generating second-type network responses, and returning the second-type network responses to the corresponding second load balancing devices. The present invention can provide services between different types of networks simply and transparently, without changing the existing IDC network structure and without large-scale system and application upgrades. The present invention also discloses a method for a data center to provide services, an evolutionary deployment method based on the above data center system, a four-layer load balancing device, and a seven-layer load balancing device.

Description

数据中心***及装置和提供服务的方法 技术领域
本发明涉及通信技术领域, 特别涉及一种数据中心***及装置和提供服务的方法。 背景技术
IPv6 ( Internet Protocol Version 6, 第六版互联网协议 ) 是用于替代现行版本 IP协 议 IPv4 ( Internet Protocol Version 4, 第四版互联网协议) 的下一代 IP协议。 IPv6 相对于 IPv4 具有优势: 更大的地址空间、 使用更小的路由表以及增加了增强的组播支 持以及对流的支持等, 具有长足的发展机会, 可以为服务盾量控制提供了良好的网络平 台。 因此, 如果将现有的 IPv4网络数据迁移到 IPv6网络, 是当前网络服务研究的一个 重要问题。
传统的 IPv4/IPv6网络数据迁移方法包括以下三种:
( 1 ) 双栈技术
如图 1所示, 双栈技术需要将数据中心的所有网络设备开启 IPv4/IPv6网络协议, 部署成本过高, 而有些老的设备并不支持 IPv6 协议。 此外, 双栈对网络的本身性能要 求很高, 很多设备在选型时没有对 IPv6进行评估, 因此风险不可控。 双栈技术中, IPv4 网络和 IPv6 网络是相互独立的, 不能实现数据互通。 (2 ) NAT ( Network Address Translation, 网络地址转换) 64/DNS ( Domain Name System, 域名***) 64
NAT64是一种有状态的网络地址与协议转换技术, 一般只支持通过 IPv6网络侧用 户发起连接访问 IPv4侧网络资源。 但 NAT64也支持通过手工配置静态映射关系, 实现 IPv4 网络主动发起连接访问 IPv6 网络。 NAT64 可实现 TCP ( Transmission Control Protocol, 传输控制协议) 、 UDP ( User Datagram Protocol, 用户数据包协议) 、 ICMP ( Internet Control Message Protocol, Internet控制 艮文十办议 )十办议下的 IPv6与 IPv4网络 地址和协议转换。 DNS64则主要是配合 NAT64工作, 主要是将 DNS查询信息中的 A 记录( IPv4地址)合成到 AAAA记录( IPv6地址) 中, 返回合成的 AAAA记录用户给 IPv6侧用户。DNS64也解决了 NAT-PT中的 DNS-ALG存在的缺陷。 NAT64—般与 DNS64 协同工作,而不需要在 IPv6客户端或 IPv4服务器端做任何修改。 NAT64解决了 NAT-PT 中的大部分缺陷, 同时配合 DNS64的协同工作, 无需像 NAT-PT中的 DNS-ALG等。
图 2示出了 NAT64与 DNS64的常见应用场景组网。 如图 2所示, DNS64 服务器 与 NAT64 路由器彼此完全独立。 其中, 64:FF9B: :/96 为 DNS64 的知名前缀, DNS64 一般默认使用此前缀进行 IPv4地址到 IPv6地址的合成, 同时该前缀也作为 NAT64的 转换前缀, 实现匹配该前缀的流量才做 NAT64转换。 一般在 DNS64与 NAT64中该前 缀被表示为 pref64::/n, 该前缀可根据实际网络部署进行配置。 当用户侧 IPv6发起连接 访问普通 IPv6网站, 流量将会匹配 IPv6默认路由而直接转发至 IPv6路由器处理。访问 IPv4单协议栈的服务器时, 将经 DNS64服务器进行前缀合成, Pref64: :/n网段的流量将 被路由转发至 NAT64 路由器上,从而实现 IPv6与 IPv4地址和协议的转换,访问 IPv4 网 络中的资源。
图 3示出了 DNS64与 NAT64的 4艮文交互过程。 如图 3所示, 网络地址结构如下:
IPv6 Only Client: 2001 : : 1234: : 1234;
Pref64: :/n: 64:FF9B: :/96
NAT64 Public IPv4 Address: 22.22.22.22
WWW.IPV6BBS.CN IPV4 Address: 11.11.11.11
NAT64/DNS64技术具有下述问题:
( A ) 需要与 DNS进行强耦合;
( B ) 只支持通过 IPv6 网络侧用户发起连接访问 IPv4侧网络资源, 通常部署于用 户侧;
( C )有状态地址映射;
( D ) 地址池需要大量公网地址。
( 3 ) IVI ( The transition to IPv6 )
图 4示出了釆用 IVI实现 IPv6的地址子集与 IPv4地址——映射的示意图。 如图 4 所示, 使用 IPv6的地址子集与 IPv4地址——映射, 从而使得该映射后的地址子集可以 为与 IPv6互相通信。 但是, IVI技术存在下述问题:
( A ) IVI不适用于 IDC ( Internet Data Center, 即互联网数据中心 )数据中心使用;
( B ) 需要与 DNS强耦合;
( C ) IVI一般部署于 ISP(Internet Service Provider, 互联网服务提供商)网络。 综上所述, 传统的 IPv4/IPv6 网络数据迁移方法部署成本较高、 风险大并且具有一 定的部署限制, 从而无法满足大规模数据中心的迁移需求。 发明内容
本发明的目的旨在至少解决上述现有技术中存在的技术缺陷之一。 为此, 本发明的第一个目的在于提供一种数据中心***, 该***可以在不改变现有 的 IDC网络结构、 大规模***和应用程序升级的前提下, 筒单透明地实现不同类网络之 间提供服务。
本发明的第二个目的在于提供一种数据中心提供服务的方法。
本发明的第三个目的在于提供一种四层负载均衡设备, 该四层负载均衡设备可以根 据调度策略对后端的负载均衡设备进行流量分配。
本发明的第四个目的在于提供一种七层负载均衡设备, 该七层负载均衡设备不仅可 以根据调度策略对后端的服务器进行流量分配, 而且可以对不同类网络进行说明。
本发明的第五个目的在于提供一种基于上述数据中心***的演进部署方法, 该部署 方法可以在网络的不同发展阶段, 部署相应类型负载均衡设备以满足网络性能需求。
为实现上述目的, 本发明第一方面的实施例提出了一种数据中心***, 包括一种数 据中心***, 包括至少一个第一负载均衡设备, 多个第二负载均衡设备,及多个服务器, 且, 所述第一负载均衡设备与核心网络设备相连, 所述多个第二负载均衡设备的每一个 均与所述第一负载均衡设备相连, 及, 所述多个服务器的每一个均与所述多个第二负载 均衡设备相连, 其中: 所述第一负载均衡设备, 用于接收来自客户端通过所述核心网络 设备发送的第一类网络请求, 并釆用第一调度策略向所述多个第二负载均衡设备中的一 个转发所述第一类网络请求; 所述多个第二负载均衡设备, 用于接收来自所述第一负载 均衡设备转发的第一类网络请求, 并将所述第一类网络请求转换为第二类网络请求, 以 及对所述第二类网络请求进行源地址和目的地址的转换, 和根据第二调度策略向所述多 个服务器中的一个转发所述经过源地址和目的地址转换的第二类网络请求; 和所述多个 服务器, 用于接收来自所述第二负载均衡设备的所述第二类网络请求, 并根据所述第二 类网络请求生成第二类网络响应, 以及将所述第二类网络响应返回给相应的第二负载均 衡设备。
根据本发明实施例的数据中心***, 可以在不改变现有的 IDC网络结构、 大规模系 统和应用程序升级的前提下, 筒单透明地实现不同类网络之间提供服务。 此外, 本发明 实施例通过两层负载均衡可以提高***运行的可靠性。
本发明第二方面的实施例提出了一种数据中心提供服务的方法, 包括如下步骤: 客 户端通过核心网络设备向第一负载均衡设备发送第一类网络请求; 所述第一负载均衡设 备釆用第一调度策略向多个第二负载均衡设备中的一个转发所述第一类网络请求; 所述 第二负载均衡设备将由所述第一负载均衡设备转发的所述第一类网络请求转换为第二 类网络请求, 对所述第二类网络请求进行源地址和目的地址的转换, 并根据第二调度策 略向多个服务器中的一个转发所述第二类网络请求; 和所述服务器接收由所述第二负载 均衡设备转发的所述第二类网络请求, 并根据所述第二类网络请求生成第二类网络响 应。
根据本发明实施例的数据中心提供服务的方法, 可以在不改变现有的 IDC 网络结 构、 大规模***和应用程序升级的前提下, 筒单透明地实现不同类网络之间提供服务。
本发明第三方面的实施例提供了一种四层负载均衡设备, 包括: 第一传输模块, 所 述第一传输模块与核心网络设备相连, 用于接收来自客户端通过所述核心网络设备发送 的第一类网络请求; 第一源目转换模块, 用于将所述第一类网络请求进行源地址和目的 地址转换; 和第一负载均衡模块, 所述第一负载均衡模块与多个七层负载均衡设备中的 一个相连, 用于釆用第一调度策略将所述源地址和目的地址转换后的第一类网络请求转 发给与所述四层负载均衡设备相连的多个七层负载均衡设备中的一个。
根据本发明实施例的四层负载均衡设备, 可以根据调度策略对后端的负载均衡设备 进行流量分配, 从而可以扩展网络设备和服务器的带宽、 增加吞吐量、 加强网络数据处 理能力、 并且提高网络的灵活性和可用性。
本发明第四方面的实施例提供了一种七层负载均衡设备, 包括: 第二传输模块, 所 述第二传输模块与多个四层负载均衡设备中的一个相连, 用于接收来自四层负载均衡设 备的第一类网络请求; 网络转换模块, 用于将所述四层类网络请求转换为第二类网络请 求; 第二源目转换模块, 用于将所述第二类网络请求进行源地址和目的地址转换; 和第 二负载均衡模块, 所述第二负载均衡模块与多个服务器中的一个相连, 用于釆用第二调 度策略将所述经过源地址和目的地址转换的第二类网络请求转发给所述多个服务器中 的一个。
根据本发明实施例的七层负载均衡设备, 可以将第一类网络请求转换为第二类网络 请求以利于不同类型网络之间的服务提供, 并且根据调度策略对后端的服务器进行流量 分配, 从而可以扩展网络设备和服务器的带宽、 增加吞吐量、 加强网络数据处理能力、 并且提高网络的灵活性和可用性。
本发明第五方面的实施例提供了一种基于第一方面实施例的数据中心***的演进 部署方法, 包括如下步骤: 检测当前网络的第一类网络的流量和第二类网络的流量的分 布状态, 并根据所述第一类网络的流量和所述第二类网络的流量的分布状态对网络中的 第一负载均衡设备和第二负载均衡设备进行部署, 其中, 如果所述第一类网络的流量和 所述第二类网络的流量的比例低于第一阈值, 则在网络中同时部署所述第一负载均衡设 备和第二负载均衡设备, 由第一负载均衡设备将所述第一类网络的流量分配并转发给后 端的第二负载均衡设备; 否则在网络中仅部署所述第一负载均衡设备, 将所述第一类网 络的流量分配并转发给后端的服务器。
根据本发明实施例的演进部署方法, 在网络的不同发展阶段, 部署相应类型负载均 衡设备并且适时去除有可能造成网络流量瓶颈的负载均衡设备,从而满足了不同发展阶 段的网络性能需求, 具有较强的灵活性。
本发明附加的方面和优点将在下面的描述中部分给出,部分将从下面的描述中变得 明显, 或通过本发明的实践了解到。 附图说明
本发明上述的和 /或附加的方面和优点从下面结合附图对实施例的描述中将变得明 显和容易理解, 其中:
图 1为传统的釆用双栈技术实现不同类型网络之间提供服务的示意图;
图 2为传统的 NAT64和 DNS64的组网应用场景示意图;
图 3为传统的 NAT64和 DNS64的通信过程示意图;
图 4为传统的釆用 IVI实现 IPv6的地址子集与 IPv4地址——映射的示意图; 图 5为根据本发明实施例的数据中心***的示意图;
图 6为根据本发明实施例的数据中心提供服务的方法的流程图;
图 7为根据本发明实施例的四层负载均衡设备的示意图;
图 8为根据本发明实施例的七层负载均衡设备的示意图; 和
图 9为根据本发明实施例的基于数据中心***的演进部署方法的示意图。 具体实施方式
下面详细描述本发明的实施例, 所述实施例的示例在附图中示出, 其中自始至终相 同或类似的标号表示相同或类似的元件或具有相同或类似功能的元件。 下面通过参考附 图描述的实施例是示例性的, 仅用于解释本发明, 而不能解释为对本发明的限制。
参照下面的描述和附图, 将清楚本发明的实施例的这些和其他方面。 在这些描述和 附图中, 具体公开了本发明的实施例中的一些特定实施方式, 来表示实施本发明的实施 例的原理的一些方式, 但是应当理解, 本发明的实施例的范围不受此限制。 相反, 本发 明的实施例包括落入所附加权利要求书的精神和内涵范围内的所有变化、 修改和等同 物。
下面参考图 5描述根据本发明实施例的数据中心*** 100。 如图 5 所示, 本发明实施例提供的数据中心*** 100 包括多个第一负载均衡设备 110、 多个第二负载均衡设备 120和多个服务器 130 , 其中, 多个第一负载均衡设备 1 10 均与核心网络设备 200相连, 多个第二负载均衡设备 120中的每一个均与多个第一负载 均衡设备 1 10相连, 多个服务器 130中的每一个均与多个第二负载均衡设备 120相连。
多个第一负载均衡设备 110用于接收来自客户端通过核心网络设备 200发送的第一 类网络请求, 并将第一类网络请求进行源地址和目的地址转换, 并釆用第一调度策略向 多个第二负载均衡设备 120中的一个转发源地址和目的地址转换后的第一类网络请求。
多个第二负载均衡设备 120用于接收来自第一负载均衡设备 110转发的源地址和目 的地址转换后的第一类网络请求, 并将经过源地址和目的地址转换的第一类网络请求转 换为第二类网络请求, 对第二类网络请求进行源地址和目的地址的转换, 根据第二调度 策略向多个服务器 130中的一个转发经过源地址和目的地址转换的第二类网络请求。
每个服务器 130接收来自第二负载均衡设备 120的第二类网络请求, 并根据第二类 网络请求生成第二类网络响应,以及将第二类网络响应返回给相应的第二负载均衡设备 120。
第二负载均衡设备 120进一步将第二类网络响应转换为第一类网络响应, 并将第一 类网络响应返回至相应的第一负载均衡设备 110, 并由第一负载均衡设备 1 10将第一类 网络相应返回给相应的客户端。
根据本发明实施例的数据中心***, 可以在不改变现有的 IDC网络结构、 大规模系 统和应用程序升级的前提下, 筒单透明地实现不同类网络之间提供服务。
需要说明的是,在本发明的上述实施例中以多个第一负载均衡设备 1 10为例进行介 绍, 此为本发明的优选实施例。 在本发明的其他实施例中, 第一负载均衡设备 110可为 一个。 在该实施例中, 如果第一负载均衡设备 110为一个, 则第一负载均衡设备 110就 可以无需对第一类网络请求进行源地址和目的地址转换。
第一负载均衡设备 1 10将第一类网络请求进行源地址和目的地址的转换。 这样, 第 二负载均衡设备在反馈网络响应时就可将网络响应反馈至相应的第一负载均衡设备
110。 具体而言, 第一负载均衡设备 110设置其自身地址作为第一类网络请求的源地址, 记为第一源地址, 并设置通过第一调度策略选择的第二负载均衡设备 120作为第一类网 络请求的目的地址, 记为第一目的地址。 由此, 在第二负载均衡设备 120向第一负载均 衡设备 110反馈数据包时, 可以根据本次设置的第一源地址, 将数据包反馈给相应的第 一负载均衡设备 110。 例如, 第二负载均衡设备 120可以根据第一源地址将转换后的第 一类网络响应返回给相应的第一负载均衡设备 110。 此外, 第一负载均衡设备 110在进 行源地址和目的地址转换时保存原来的第一源地址, 从而可以将由第二负载均衡设备 120反馈的第一类网络响应根据上述第一源地址反馈给相应的客户端。
在本发明的一个实施例中, 第一负载均衡设备 110还用于对第一类网络请求进行源 端口和目的端口的转换。 与第一负载均衡设备 no将第一类网络请求进行源地址和目的 地址的转换相类似的, 第一负载均衡设备 1 10设置其自身端口作为第一类网络请求的源 端口, 记为第一源端口, 并设置通过第一调度策略选择的第二负载均衡设备 120作为第 一类网络请求的目的端口, 记为第一目的端口。 由此, 在第二负载均衡设备 120向第一 负载均衡设备 1 10反馈数据包时, 可以根据本次设置的源端口, 将数据包反馈给相应的 第一负载均衡设备 110。 例如, 第二负载均衡设备 120可以根据第一源端口将转换后的 第一类网络响应返回给相应的第一负载均衡设备 110。 此外, 第一负载均衡设备 110在 进行源端口和目的端口转换时保存原来的第一源端口,从而可以将由第二负载均衡设备
120反馈的第一类网络响应根据上述第一源端口反馈给相应的客户端。
第一负载均衡设备 110釆用第一调度策略将上述经过源地址和目的地址转换后的第 一类网络请求转发给与第一目的地址对应的第二负载均衡设备 120。 其中, 第一调度策 略包括轮询方式、 五元组哈希策略或源地址哈希策略。 可以理解的是, 第一调度策略的 方式不限于此, 上述对第一调度策略的举例仅是出于示例的目的, 而不是为了限制本发 明的保护范围。
第二负载均衡设备 120釆用 SOCKET方式在第一类网络请求和第二类网络请求之 间, 及第一类网络响应和第二类网络响应之间进行转换。 然后, 对第二类网络请求进行 源地址和目的地址的转换, 釆用第二调度策略将源地址和目的地址的转换的第二类网络 请求转换给多个服务器 130中的一个。
具体而言, 第二负载均衡设备 120设置其自身地址作为第二类网络请求的源地址, 记为第二源地址, 并设置通过第二调度策略选择的服务器 130作为第二类网络请求的目 的地址, 记为第二目的地址。 可以理解的是, 第二源地址和第一目的地址相同。 由此, 在服务器 130向第一负载均衡设备 110反馈数据包时,可以根据本次设置的第二源地址, 将数据包反馈给相应的第二负载均衡设备 120。 例如, 服务器 130可以根据第二源地址 将第二类网络响应返回给相应的第二负载均衡设备 120。 此外, 第二负载均衡设备 120 在进行源地址和目的地址转换时保存原来的第二源地址, 从而可以将由服务器 130反馈 的第二类网络响应根据上述第二源地址反馈给相应的第一负载均衡设备 1 10。
在本发明的一个实施例中, 第二负载均衡设备 120还用于对第二类网络请求的源端 口和目的端口进行转换。 具体而言, 第二负载均衡设备 120设置其自身端口作为第二类 网络请求的源端口, 记为第二源端口, 并设置通过第二调度策略选择的服务器 130作为 第二类网络请求的目的端口, 记为第二目的端口。 由此, 在服务器 130向第二负载均衡 设备 120反馈数据包时, 可以根据本次设置的第二源端口, 将数据包反馈给相应的第二 负载均衡设备 120。 例如, 服务器 130可以根据第二源端口将第二类网络响应返回给相 应的第二负载均衡设备 120。 此外, 第二负载均衡设备 120在进行源端口和目的端口转 换时保存原来的第二源端口,从而可以将由服务器 130反馈的第二类网络响应根据上述 第二源端口反馈给相应的第一负载均衡设备 110。
第二负载均衡设备 120釆用第二调度策略将上述经过源地址和目的地址转换后的第 二类网络请求转发给与目的地址对应的服务器 130。其中, 第二调度策略包括轮询方式、 URL ( Universal Resource Locator, 统一资源定位符) 调度策略、 URL哈希调度策略或 一致性哈希调度策略。 可以理解的是, 第二调度策略的方式不限于此, 上述对第二调度 策略的举例仅是出于示例的目的, 而不是为了限制本发明的保护范围。
第二负载均衡设备 120将由服务器 130返回的第二类网络响应转换为第一类网络响 应。 在本发明的一个实施例中, 第二负载均衡设备 120 可以釆用 SOCKET 方式釆用 SOCKET方式在第一类网络请求和第二类网络请求之间,及第一类网络响应和第二类网 络响应之间进行转换。 然后, 第二负载均衡设备 120根据设置的第二源地址将第一类网 络响应返回给相应的第一负载均衡设备 1 10 , 并由第一负载均衡设备 110将上述第一类 网络响应返回给相应的客户端。
在本发明的一个实施例中, 第一类网络可以为 IPv6 网络, 第二类网络可以为 IPv4 网络。 相应地, 第一类网络请求为 IPv6请求, 第二类网络请求为 IPv4请求。 第一类网 络响应为 IPv6响应, 第二类网络响应为 IPv4响应。
在本发明的一个实施例中, 第一负载均衡设备 110可以为四层负载均衡设备, 第二 负载均衡设备 120可以为七层负载均衡设备。 其中, 第一负载均衡设备 1 10和第二负载 均衡设备 120均可以为多个。 其中, 多个第一负载均衡设备 110可以釆用主备冗余模式 或集群模式协同工作。 多个第二负载均衡设备 120也可以釆用主备冗余模式或集群模式 协同工作。 由此, 当一个第一负载均衡设备 110或第二负载均衡设备 120出现故障时, 不会影响到整个数据中心***的工作, 从而提高了整个***运行的安全性。
根据本发明实施例的数据中心***, 可以在不改变现有的 IDC网络结构、 大规模系 统和应用程序升级的前提下, 筒单透明地实现不同类网络之间提供服务。
下面参考图 6描述根据本发明实施例的数据中心提供服务的方法。
如图 6所示, 本发明实施例提供的数据中心提供服务的方法, 包括如下步骤: S201 : 客户端通过核心网络设备向第一负载均衡设备发送第一类网络请求。
S202: 第一负载均衡设备将第一类网络请求进行源地址和目的地址转换, 并釆用第 一调度策略向多个第二负载均衡设备中的一个转发源地址和目的地址转换后的第一类 网络请求。
具体而言, 第一负载均衡设备设置其自身地址作为第一类网络请求的源地址, 记为 第一源地址, 并设置通过第一调度策略选择的第二负载均衡设备作为第一类网络请求的 目的地址, 记为第一目的地址。 由此, 当第二负载均衡设备有数据包需要向第一负载均 衡设备反馈时, 可以根据本次设置的第一源地址, 将数据包反馈给相应的第一负载均衡 设备。此外,第一负载均衡设备在进行源地址和目的地址转换时保存原来的第一源地址, 从而可以将数据包根据上述第一源地址反馈给相应的客户端。
在本发明的一个实施例中, 步骤 S202还包括由第一负载均衡设备对第一类网络请 求进行源端口和目的端口的转换。 与第一负载均衡设备将第一类网络请求进行源地址和 目的地址的转换相类似的, 第一负载均衡设备设置其自身端口作为第一类网络请求的源 端口, 记为第一源端口, 并设置通过第一调度策略选择的第二负载均衡设备作为第一类 网络请求的目的端口, 记为第一目的端口。 由此, 在第二负载均衡设备有数据包需要向 第一负载均衡设备反馈时, 可以根据本次设置的源端口, 将数据包反馈给相应的第一负 载均衡设备。 此外, 第一负载均衡设备在进行源端口和目的端口转换时保存原来的第一 源端口, 从而可以将数据包根据上述第一源端口反馈给相应的客户端。
第一负载均衡设备釆用第一调度策略将上述经过源地址和目的地址转换后的第一 类网络请求转发给与第一目的地址对应的第二负载均衡设备。 在本发明的一个实施例 中, 第一调度策略包括轮询方式、 五元组哈希策略或源地址哈希策略。 可以理解的是, 第一调度策略的方式不限于此, 上述对第一调度策略的举例仅是出于示例的目的, 而不 是为了限制本发明的保护范围。
S203 : 第二负载均衡设备将由第一负载均衡设备转发的经过源地址和目的地址转换 的第一类网络请求转换为第二类网络请求, 对第二类网络请求进行源地址和目的地址的 转换, 根据第二调度策略向多个服务器中的一个转发所述第二类网络请求。
在本步骤中, 第二负载均衡设备釆用 SOCKET 方式将第一类网络请求转换为第二 类网络请求。 然后, 对第二类网络请求进行源地址和目的地址的转换, 釆用第二调度策 略将源地址和目的地址的转换的第二类网络请求转换给多个服务器中的一个。
具体而言, 第二负载均衡设备设置其自身地址作为第二类网络请求的源地址, 记为 第二源地址, 并设置通过第二调度策略选择的服务器作为第二类网络请求的目的地址, 记为第二目的地址。 可以理解的是, 第二源地址和第一目的地址相同。 由此, 在服务器 向第一负载均衡设备反馈数据包时, 可以根据本次设置的第二源地址, 将数据包反馈给 相应的第二负载均衡设备。 此外, 第二负载均衡设备在进行源地址和目的地址转换时保 存原来的第二源地址, 从而可以将由服务器反馈的数据根据上述第二源地址反馈给相应 的第一负载均衡设备。
在本发明的一个实施例中, 步骤 203还包括由第二负载均衡设备设置其自身端口作 为第二类网络请求的源端口, 记为第二源端口, 换言之, 第二源端口为第一目的端口。 并且, 第二负载均衡设备设置通过第二调度策略选择的服务器作为第二类网络请求的目 的端口,记为第二目的端口。 由此,在服务器有数据包需要向第二负载均衡设备反馈时, 可以根据本次设置的第二源端口, 将数据包反馈给相应的第二负载均衡设备。 此外, 第 二负载均衡设备在进行源端口和目的端口转换时保存原来的第二源端口,从而可以将由 服务器反馈的数据根据上述第二源端口反馈给相应的第一负载均衡设备。
第二负载均衡设备釆用第二调度策略将上述经过源地址和目的地址转换后的第二 类网络请求转发给与目的地址对应的服务器。 在本发明的一个实施例中, 第二调度策略 包括轮询方式、 URL调度策略、 URL哈希调度策略或一致性哈希调度策略。 可以理解 的是,第二调度策略的方式不限于此,上述对第二调度策略的举例仅是出于示例的目的, 而不是为了限制本发明的保护范围。
S204: 服务器接收由第二负载均衡设备转发的第二类网络请求, 并根据第二类网络 请求生成第二类网络响应, 并将第二类网络响应返回给相应的第二负载均衡设备。
S205: 第二负载均衡设备将第二类网络响应转换为第一类网络响应, 并将第一类网 络响应返回至相应的第一负载均衡设备, 并由第一负载均衡设备将第一类网络响应返回 给相应的客户端。
在本发明的一个实施中, 第二负载均衡设备釆用 SOCKET 方式将第二类网络响应 转换为第一类网络响应。 第二负载均衡设备根据步骤 203中设置的第二源地址将第一类 网络响应返回给相应的第一负载均衡设备, 并由第一负载均衡设备根据第一源地址将上 述第一类网络响应返回给相应的客户端。
可以理解的是, 第一类网络请求和第一类网络响应属于同一个网络类型, 第二类网 络请求和第二类网络响应属于同一个网络类型。
在本发明的一个实施例中, 第一类网络可以为 IPv6 网络, 第二类网络可以为 IPv4 网络。 相应地, 第一类网络请求为 IPv6请求, 第二类网络请求为 IPv4请求。 第一类网 络响应为 IPv6响应, 第二类网络响应为 IPv4响应。 在本发明的一个实施例中, 第一负载均衡设备可以为四层负载均衡设备, 第二负载 均衡设备可以为七层负载均衡设备。 其中, 第一负载均衡设备和第二负载均衡设备均可 以为多个。 其中, 多个第一负载均衡设备可以釆用主备冗余模式或集群模式协同工作。 多个第二负载均衡设备也可以釆用主备冗余模式或集群模式协同工作。 由此, 当一个第 一负载均衡设备或第二负载均衡设备出现故障时, 不会影响到整个数据中心***的工 作, 从而提高了整个***运行的安全性。
根据本发明实施例的数据中心提供服务的方法, 可以在不改变现有的 IDC 网络结 构、 大规模***和应用程序升级的前提下, 筒单透明地实现不同类网络之间提供服务。
下面参考图 7描述根据本发明实施例的四层负载均衡设备 300。
如图 7所示, 本发明实施例提供的四层负载均衡设备 300包括第一传输模块 310、 第二源目转换模块 320和第一负载均衡模块 330。
第一传输模块 310与核心网络设备相连, 用于接收来自客户端通过核心网络设备发 送的第一类网络请求, 例如 IPv6请求。 第一源目转换模块 320用于将第一类网络请求 进行源地址和目的地址转换。 第一负载均衡模块 330与多个第二负载均衡设备(及七层 负载均衡设备) 中的一个相连, 用于釆用第一调度策略将所述源地址和目的地址转换后 的第一类网络请求转发给多个第二负载均衡设备中的一个。
根据本发明实施例的四层负载均衡设备, 可以根据调度策略对后端的负载均衡设备 进行流量分配, 从而可以扩展网络设备和服务器的带宽、 增加吞吐量、 加强网络数据处 理能力、 并且提高网络的灵活性和可用性。
在本发明的一个实施例中, 第二负载均衡设备可以为七层负载均衡设备。
第一源目转换模块 320将由客户端通过核心网络设备发送第一类网络请求进行源地 址和目的地址转换。 具体而言, 第一源目转换模块 320设置四层负载均衡设备 300的地 址作为第一类网络请求的源地址, 记为第一源地址, 并设置通过第一调度策略选择的第 二负载均衡设备作为第一类网络请求的目的地址, 记为第一目的地址。 由此, 当第二负 载均衡设备有数据包需要向第一负载均衡设备反馈时, 可以根据本次设置的第一源地 址, 将数据包反馈给相应的四层负载均衡设备 300。 此外, 四层负载均衡设备 300在进 行源地址和目的地址转换时保存原来的第一源地址, 从而可以由第一传输模块 310将数 据包根据上述第一源地址反馈给相应的客户端。
在本发明的一个实施例中, 第一源目转换模块 320还用于对第一类网络请求进行源 端口和目的端口的转换。 具体而言, 第一源目转换模块 320设置四层负载均衡设备 300 的端口作为第一类网络请求的源端口, 记为第一源端口, 并设置通过第一调度策略选择 的第二负载均衡设备作为第一类网络请求的目的端口, 记为第一目的端口。 由此, 在第 二负载均衡设备有数据包需要向四层负载均衡设备 300反馈时, 可以根据本次设置的源 端口, 将数据包反馈给相应的四层负载均衡设备 300。 例如, 将第二负载均衡设备通过 四层负载均衡设备 300返回的第一类网络响应进一步返回给相应的客户端。 此外, 四层 负载均衡设备 300在进行源端口和目的端口转换时保存原来的第一源端口, 从而由第一 传输模块 310可以将数据包根据上述第一源端口反馈给相应的客户端。
四层负载均衡设备 300基于 TCP/IP协议对后端的多个第二负载均衡设备进行流量 调度, 通过釆用第一调度策略将上述经过源地址和目的地址转换后的第一类网络请求转 发给与第一目的地址对应的第二负载均衡设备。 在本发明的一个实施例中, 第一调度策 略包括轮询方式、 五元组哈希策略或源地址哈希策略。 可以理解的是, 第一调度策略的 方式不限于此, 上述对第一调度策略的举例仅是出于示例的目的, 而不是为了限制本发 明的保护范围。
在本发明的一个实施例中, 本发明实施例的四层负载均衡设备 300还包括第一防御 模块和第一后端检查模块。
第一防御模块用于防御对四层负载均衡设备 300的攻击。 具体而言, 第一防御模块 具有四层 DDoS ( Distributed Denial of service , 分布式拒绝服务攻击) 防御功能, 主要 防御传输层上的服务攻击, 例如针对 SYN/ACK等标志位攻击。 具体而言, 第一防御模 块可以提供针对 SYN FLOOD攻击、 ACK STORM等攻击的防御措施。
第一后端检查模块用于检查四层负载均衡设备 300 的当前服务状态和当前设备状 态, 并在四层负载均衡设备 300发生故障时, 对故障进行自动处理。 具体而言, 当四层 负载均衡设备 300在运行过程中, 一项服务或一个机器出现故障时, 第一后端检查模块 自动检测出该故障,并对该故障进行处理,从而保障四层负载均衡设备 300的正常运行, 不会因为某项服务或机器的故障而影响整个设备的运转。
根据本发明实施例的四层负载均衡设备, 可以根据调度策略对后端的负载均衡设备 进行流量分配, 从而可以扩展网络设备和服务器的带宽、 增加吞吐量、 加强网络数据处 理能力、 并且提高网络的灵活性和可用性。
下面参考图 8描述根据本发明实施例的七层负载均衡设备 400。
如图 8所示, 本发明实施例提供的七层负载均衡设备 400包括第二传输模块 410、 网络转换模块 420、 第二源目转换模块 430和第二负载均衡模块 440。
第二传输模块 410与多个第一负载均衡设备(即四层负载均衡设备)中的一个相连, 用于接收来自第一负载均衡设备的经过源地址和目的地址转换的第一类网络请求, 例 如, IPv6请求。 网络转换模块 420用于将经过源地址和目的地址转换的第一类网络请求 转换为第二类网络请求, 例如, IPv4请求。 第二源目转换模块 430用于将第二类网络请 求进行源地址和目的地址转换。 第二负载均衡模块 440与多个服务器中的一个相连, 用 于釆用第二调度策略将经过源地址和目的地址转换的第二类网络请求转发给多个服务 器中的一个。
根据本发明实施例的七层负载均衡设备, 可以将第一类网络请求转换为第二类网络 请求以利于不同类型网络之间的服务提供, 并且根据调度策略对后端的服务器进行流量 分配, 从而可以扩展网络设备和服务器的带宽、 增加吞吐量、 加强网络数据处理能力、 并且提高网络的灵活性和可用性。
在本发明的一个实施例中, 第一负载均衡设备可以为四层负载均衡设备。
在本发明的又一个实施例中, 网络转换模块 420可以通过 SOCKET方式将经过源 地址和目的地址转换的第一类网络请求转换为第二类网络请求。
第二源目转换模块 430将网络转换模块 420发送的第二类网络请求进行源地址和目 的地址的转换。 具体而言, 第二源目转换模块 430设置七层负载均衡设备 400的地址作 为第二类网络请求的源地址, 记为第二源地址, 并设置通过第二调度策略选择的服务器 作为第二类网络请求的目的地址, 记为第二目的地址。 由此, 当服务器有数据包需要向 七层负载均衡设备 400反馈时, 可以根据本次设置的第二源地址, 将数据包反馈给相应 的七层负载均衡设备 400。 此外, 七层负载均衡设备 400在进行源地址和目的地址转换 时保存原来的第二源地址, 从而可以由第二传输模块 310将数据包根据上述第二源地址 反馈给相应的第一负载均衡设备。
在本发明的一个实施例中, 第二源目转换模块 430还用于对第二类网络请求进行源 端口和目的端口的转换。 具体而言, 第二源目转换模块 430设置七层负载均衡设备 400 的端口作为第二类网络请求的源端口, 记为第二源端口, 并设置通过第二调度策略选择 的服务器作为第二类网络请求的目的端口, 记为第二目的端口。 由此, 在服务器有数据 包需要向七层负载均衡设备 400反馈时, 可以根据本次设置的源端口, 将数据包反馈给 相应的七层负载均衡设备 400。 此外, 七层负载均衡设备 400在进行源端口和目的端口 转换时保存原来的第二源端口, 从而由第二传输模块 410可以将数据包根据上述第二源 端口反馈给相应的第一负载均衡设备。
七层负载均衡设备 400基于 URL等特征对后端的多个服务器进行流量调度, 通过 釆用第二调度策略将上述经过源地址和目的地址转换后的第二类网络请求转发给与第 二目的地址对应的服务器。在本发明的一个实施例中,第二调度策略包括轮询方式、 URL 调度策略、 URL哈希调度策略或一致性哈希调度策略。 可以理解的是, 第二调度策略的 方式不限于此, 上述对第二调度策略的举例仅是出于示例的目的, 而不是为了限制本发 明的保护范围。
服务器将七层负载均衡设备 400发送的第二类网络请求进行响应, 并生成相应的第 二类网络响应。服务器将上述第二类网络响应根据第二源地址返回给相应的七层负载均 衡设备 400。 网络转换模块 420还用于将上述服务器返回的第二类网络响应转换为第一 类网络响应,并由第二传输模块 410将第一类网络响应返回给相应的第一负载均衡设备。
在本发明的一个实施例中, 本发明实施例的七层负载均衡设备 400还包括第二防御 模块和第二后端检查模块。
第二防御模块用于防御对七层负载均衡设备 400的攻击。 具体而言, 第二防御模块 具有七层 DDoS防御功能, 主要防御应用层上的服务攻击。 具体而言, 第二防御模块可 以提供针对 URL阈值封禁、 IP阈值封禁等攻击的防御措施。
第二后端检查模块用于检查七层负载均衡设备 400 的当前服务状态和当前设备状 态, 并在七层负载均衡设备 400发生故障时, 对故障进行自动处理。 具体而言, 当七层 负载均衡设备 400在运行过程中, 一项服务或一个机器出现故障时, 第二后端检查模块 自动检测出该故障,并对该故障进行处理,从而保障七层负载均衡设备 400的正常运行, 不会因为某项服务或机器的故障而影响整个设备的运转。
根据本发明实施例的七层负载均衡设备, 可以将第一类网络请求转换为第二类网络 请求以利于不同类型网络之间的服务提供, 并且根据调度策略对后端的服务器进行流量 分配, 从而可以扩展网络设备和服务器的带宽、 增加吞吐量、 加强网络数据处理能力、 并且提高网络的灵活性和可用性。
下面参考图 9描述根据本发明实施例的数据中心***的演进部署方法。 其中, 数据 中心***可以为本发明上述实施例提供的数据中心*** 100。
如图 9所示, 本发明实施例提供的数据中心***的演进部署方法包括如下步骤:
S501 : 检测当前网络的第一类网络的流量和第二类网络的流量的分布状态。
S 502:根据第一类网络的流量和第二类网络的流量的分布状态对网络中的第一负载 均衡设备和第二负载均衡设备进行部署。
如果第一类网络的流量和所述第二类网络的流量的比例低于第一阈值, 则在网络中 同时部署第一负载均衡设备和第二负载均衡设备, 由第一负载均衡设备将第一类网络的 流量分配并转发给后端的第二负载均衡设备, 否则在网络中仅部署所述第一负载均衡设 备, 将所述第一类网络的流量分配并转发给后端的服务器。 根据本发明实施例的演进部署方法, 在网络的不同发展阶段, 部署相应类型负载均 衡设备并且适时去除有可能造成网络流量瓶颈的负载均衡设备,从而满足了不同发展阶 段的网络性能需求, 具有较强的灵活性。
在本发明的一个实施例中, 第一类网络可以为 IPv6网络, 第二类网络为 IPv4网络。 下面以 IPv6网络和 IPv4网络为例对演进部署方法进行详细说明。
首先检测当前网络中的 IPv6网络流量和 IPv4网络流量,然后才 据 IPv6网络流量和 IPv4网络流量的分布转帖对网络中的第一负载均衡设备和第二负载均衡设备进行部署。 在本发明的一个实施例中, 第一负载均衡设备为四层负载均衡设备, 第二负载均衡设备 为七层负载均衡设备。
具体而言, 如果 IPv6网络的流量和 IPv4网络流量的比例低于第一阈值, 即当前网 络处于 IPv6 网络的初期, IPv6 网络流量较小, 此时在网络中同时部署四层负载均衡设 备和七层负载均衡设备。 其中, 七层负载均衡设备釆用双栈协议, 即七层负载均衡设备 釆用 IPv4协议栈和 IPv6协议栈。四层负载均衡设备将 IPv4请求直接转发给后端的业务 服务器, 而将 IPv6请求根据流量分配并转发给后端的七层负载均衡设备。 例如, 四层 负载均衡设备釆用第一调度策略将流量分配并转发给后端的七层负载均衡设备。 其中, 第一调度策略可以包括轮询方式、 五元组哈希策略或源地址哈希策略。 七层负载均衡设 备对来自四层负载均衡设备的 IPv6请求转换为 IPv4请求,并将 IPv4请求发送给后端的 业务服务器, 由后端的业务服务器对该 IPv4请求进行响应。
如果 IPv6网络的流量和 IPv4网络的流量的比例位于第一阈值和第二阈值之间或者 高于第二阈值, 则在网络中仅部署四层负载均衡设备, 将 IPv6 的流量分配并转发给后 端的业务服务器。 具体而言, 当 IPv6网络的流量和 IPv4网络的流量的比例位于第一阈 值和第二阈值之间时, 当前网络处于 IPv6网络的中期, IPv6网络流量和 IPv4网络流量 接近, 此时在网络中仅部署四层负载均衡设备, 从而避免七层负载均衡设备成为网络流 量瓶颈。 将业务前端的服务器设置为双栈服务器, 即釆用双协议栈, 包括用 IPv4 协议 栈和 IPv6 协议栈, 从而可以直接接入并作为四层负载均衡设备的后端服务器。 四层负 载均衡设备可以将流量分配并转发给后端的服务器。
如果 IPv6网络的流量和 IPv4网络的流量的比例高于第二阈值时, 则在网络中仅部 署四层负载均衡设备, 将 IPv6 的流量分配并转发给后端的业务服务器。 具体而言, 当 IPv6网络的流量和 IPv4网络的流量的比例高于第二阈值时, 当前网络处于 IPv6网络的 后期, 互联网中有少部 IPv4 网络流量。 将业务前端的服务器设置为双栈服务器, 即釆 用双协议栈, 包括用 IPv4协议栈和 IPv6协议栈, 从而可以直接接入并作为四层负载均 衡设备的后端服务器。 四层负载均衡设备可以将流量分配并转发给后端的服务器。 如果当前网络的 IPv4网络流量为 0 , 则表示 IPv4/IPv6的过渡期结束, 当前网络为 IPv6 网络, 服务器釆用 IPv6协议栈, 从而可以直接接入并作为四层负载均衡设备的后 端服务器。 四层负载均衡设备可以将流量分配并转发给后端的服务器。
根据本发明实施例的演进部署方法, 在网络的不同发展阶段, 部署相应类型负载均 衡设备并且适时去除有可能造成网络流量瓶颈的负载均衡设备,从而满足了不同发展阶 段的网络性能需求, 具有较强的灵活性。
本技术领域的普通技术人员可以理解实现上述实施例方法携带的全部或部分步骤 是可以通过程序来指令相关的硬件完成, 所述的程序可以存储于一种计算机可读存储介 盾中, 该程序在执行时, 包括方法实施例的步骤之一或其组合。
此外, 在本发明各个实施例中的各功能单元可以集成在一个处理模块中, 也可以是 各个单元单独物理存在, 也可以两个或两个以上单元集成在一个模块中。 上述集成的模 块既可以釆用硬件的形式实现, 也可以釆用软件功能模块的形式实现。 所述集成的模块 如果以软件功能模块的形式实现并作为独立的产品销售或使用时, 也可以存储在一个计 算机可读取存储介盾中。
上述提到的存储介盾可以是只读存储器, 磁盘或光盘等。
在本说明书的描述中, 参考术语 "一个实施例" 、 "一些实施例" 、 "示例" 、 "具体示例" 、 或 "一些示例" 等的描述意指结合该实施例或示例描述的具体特征、 结 构、 材料或者特点包含于本发明的至少一个实施例或示例中。 在本说明书中, 对上述术 语的示意性表述不一定指的是相同的实施例或示例。 而且, 描述的具体特征、 结构、 材 料或者特点可以在任何的一个或多个实施例或示例中以合适的方式结合。
尽管已经示出和描述了本发明的实施例, 对于本领域的普通技术人员而言, 可以理 解在不脱离本发明的原理和精神的情况下可以对这些实施例进行多种变化、 修改、 替换 和变型, 本发明的范围由所附权利要求及其等同限定。

Claims

权利要求书
1、 一种数据中心***, 其特征在于, 包括至少一个第一负载均衡设备, 多个第二 负载均衡设备, 及多个服务器, 且所述第一负载均衡设备与核心网络设备相连, 所述多 个第二负载均衡设备的每一个均与所述第一负载均衡设备相连, 及所述多个服务器的每 一个均与所述多个第二负载均衡设备相连, 其中:
所述第一负载均衡设备, 用于接收来自客户端通过所述核心网络设备发送的第一类 网络请求, 并釆用第一调度策略向所述多个第二负载均衡设备中的一个转发所述第一类 网络请求;
所述多个第二负载均衡设备, 用于接收来自所述第一负载均衡设备转发的第一类网 络请求, 并将所述第一类网络请求转换为第二类网络请求, 以及对所述第二类网络请求 进行源地址和目的地址的转换, 和根据第二调度策略向所述多个服务器中的一个转发所 述经过源地址和目的地址转换的第二类网络请求; 和
所述多个服务器, 用于接收来自所述第二负载均衡设备的所述第二类网络请求, 并 根据所述第二类网络请求生成第二类网络响应, 以及将所述第二类网络响应返回给相应 的第二负载均衡设备。
2、 如权利要求 1 所述的数据中心***, 其特征在于, 所述第一负载均衡设备为多 个, 其中, 所述多个第二负载均衡设备的每一个均与所述多个第一负载均衡设备相连。
3、 如权利要求 1或 2所述的数据中心***, 其特征在于, 所述第二负载均衡设备 还用于将所述服务器返回的所述第二类网络响应转换为第一类网络响应, 并将所述第一 类网络响应返回至相应的第一负载均衡设备, 并由所述相应的第一负载均衡设备将所述 第一类网络响应返回给相应的客户端。
4、 如权利要求 1-3 任一项所述的数据中心***, 其特征在于, 所述第一类网络为 IPv6网络, 所述第二类网络为 IPv4网络。
5、 如权利要求 1 所述的数据中心***, 其特征在于, 所述第一负载均衡设备为四 层负载均衡设备, 所述第二负载均衡设备为七层负载均衡设备。
6、 如权利要求 1 所述的数据中心***, 其特征在于, 所述第一负载均衡设备还用 于对所述第一类网络请求进行源端口和目的端口转换; 所述第二负载均衡设备还用于对 所述第二类网络请求进行源端口和目的端口的转换。
7、 如权利要求 1 所述的数据中心***, 其特征在于, 所述第二负载均衡设备釆用 SOCKET方式在所述第一类网络请求和所述第二类网络请求之间,及所述第一类网络响 应和所述第二类网络响应之间进行转换。
8、 如权利要求 1 所述的数据中心***, 其特征在于, 所述第一调度策略包括轮询 方式、 五元组哈希策略或源地址哈希策略; 所述第二调度策略包括轮询方式、 统一资源 定位 URL调度策略、 URL哈希调度策略或一致性哈希调度策略。
9、 如权利要求 2所述的数据中心***, 其特征在于, 所述多个第一负载均衡设备 和所述多个第二负载均衡设备釆用主备冗余模式或集群模式协同工作。
10、 一种数据中心提供服务的方法, 其特征在于, 包括如下步骤:
客户端通过核心网络设备向第一负载均衡设备发送第一类网络请求;
所述第一负载均衡设备釆用第一调度策略向多个第二负载均衡设备中的一个转发 所述第一类网络请求;
所述第二负载均衡设备将由所述第一负载均衡设备转发的所述第一类网络请求转 换为第二类网络请求, 对所述第二类网络请求进行源地址和目的地址的转换, 并根据第 二调度策略向多个服务器中的一个转发所述第二类网络请求; 和
所述服务器接收由所述第二负载均衡设备转发的所述第二类网络请求, 并根据所述 第二类网络请求生成第二类网络响应。
11、 如权利要求 10所述的数据中心提供服务的方法, 其特征在于, 还包括: 所述服务器将所述第二类网络响应返回给相应的第二负载均衡设备; 和
所述第二负载均衡设备将所述第二类网络响应转换为第一类网络响应, 并将所述第 一类网络响应返回至相应的第一负载均衡设备, 并由所述第一负载均衡设备将所述第一 类网络响应返回给相应的客户端。
12、 如权利要求 10或 1 1所述的数据中心提供服务的方法, 其特征在于, 所述第一 类网络为 IPv6网络, 所述第二类网络为 IPv4网络。
13、 如权利要求 10-12任一项所述的数据中心提供服务的方法, 其特征在于, 所述 第一负载均衡设备为四层负载均衡设备, 所述第二负载均衡设备为七层负载均衡设备。
14、 如权利要求 10所述的数据中心提供服务的方法, 其特征在于, 还包括: 所述第一负载均衡设备对所述第一类网络请求进行源端口和目的端口转换; 所述第 二负载均衡设备对所述第二网络请求进行源端口和目的端口转换。
15、 如权利要求 11 所述的数据中心提供服务的方法, 其特征在于, 第二负载均衡 设备釆用 SOCKET 方式在所述第一类网络请求和所述第二类网络请求之间, 及所述第 一类网络响应和所述第二类网络响应之间进行转换。
16、 如权利要求 10所述的数据中心提供服务的方法, 其特征在于, 所述第一调度 策略包括轮询方式、 五元组哈希策略或源地址哈希策略; 所述第二调度策略包括轮询方 式、 统一资源定位 URL调度策略、 URL哈希调度策略或一致性哈希调度策略。
17、 如权利要求 10所述的数据中心提供服务的方法, 其特征在于, 多个所述四层 负载均衡设备和多个所述七层负载均衡设备釆用主备冗余模式或集群模式协同工作。
18、 一种四层负载均衡设备, 其特征在于, 包括:
第一传输模块, 所述第一传输模块与核心网络设备相连, 用于接收来自客户端通过 所述核心网络设备发送的第一类网络请求;
第一源目转换模块, 用于将所述第一类网络请求进行源地址和目的地址转换; 和 第一负载均衡模块, 所述第一负载均衡模块与多个七层负载均衡设备中的一个相 连, 用于釆用第一调度策略将所述源地址和目的地址转换后的第一类网络请求转发给与 所述四层负载均衡设备相连的多个七层负载均衡设备中的一个。
19、 如权利要求 18所述的四层负载均衡设备, 其特征在于, 所述第一传输模块还 用于将相应的七层负载均衡设备返回的第一类网络响应进一步返回给相应的客户端。
20、 如权利要求 18所述的四层负载均衡设备, 其特征在于, 所述第一调度策略包 括轮询方式、 五元组哈希策略或源地址哈希策略。
21、 如权利要求 18所述的四层负载均衡设备, 其特征在于, 还包括:
第一防御模块, 用于防御对所述四层负载均衡设备的攻击; 和
第一后端检查模块, 用于检查所述四层负载均衡设备的当前服务状态和当前设备状 态, 并在所述四层负载均衡设备发生故障时, 对所述故障进行自动处理。
22、 一种七层负载均衡设备, 其特征在于, 包括:
第二传输模块, 所述第二传输模块与多个四层负载均衡设备中的一个相连, 用于接 收来自四层负载均衡设备的第一类网络请求;
网络转换模块, 用于将所述四层类网络请求转换为第二类网络请求;
第二源目转换模块, 用于将所述第二类网络请求进行源地址和目的地址转换; 和 第二负载均衡模块, 所述第二负载均衡模块与多个服务器中的一个相连, 用于釆用 第二调度策略将所述经过源地址和目的地址转换的第二类网络请求转发给所述多个服 务器中的一个。
23、 如权利要求 22所述的七层负载均衡设备, 其特征在于, 所述网络转换模块还 用于将所述由所述服务器返回的第二类网络响应转换为第一类网络响应, 并由所述第二 传输模块将所述第一类网络响应返回给相应的四层负载均衡设备, 其中, 所述第二类网 络响应为所述服务器根据来自所述七层负载均衡模块的第二类网络请求生成的第二类 网络响应。
24、 如权利要求 22所述的七层负载均衡设备, 其特征在于, 所述第二调度策略包 括轮询方式、 统一资源定位 URL调度策略、 URL哈希调度策略或一致性哈希调度策略。
25、 如权利要求 22所述的七层负载均衡设备, 其特征在于, 还包括:
第二防御模块, 用于防御对所述七层负载均衡设备的攻击; 和
第二后端检查模块, 用于检查所述七层负载均衡设备的当前服务状态, 并在所述七 层负载均衡设备发生故障时, 对所述故障进行自动处理。
26、 一种基于权利要求 1-9任一项所述的数据中心***的演进部署方法, 其特征在 于, 包括如下步骤:
检测当前网络的第一类网络的流量和第二类网络的流量的分布状态, 并根据所述第 一类网络的流量和所述第二类网络的流量的分布状态对网络中的第一负载均衡设备和 第二负载均衡设备进行部署, 其中,
如果所述第一类网络的流量和所述第二类网络的流量的比例低于第一阈值, 则在网 络中同时部署所述第一负载均衡设备和第二负载均衡设备, 由第一负载均衡设备将所述 第一类网络的流量分配并转发给后端的第二负载均衡设备; 否则在网络中仅部署所述第 一负载均衡设备, 将所述第一类网络的流量分配并转发给后端的服务器。
27、 如权利要求 26所述的数据中心***的演进部署方法, 其特征在于, 如果所述 第一类网络的流量和所述第二类网络的流量的比例低于所述第一阈值, 则所述第二负载 均衡设备釆用双协议栈, 所述双协议栈包括 IPv4协议栈和 IPv6协议栈。
28、 如权利要求 26所述的数据中心***的演进部署方法, 其特征在于, 如果所述 第一类网络的流量和所述第二类网络的流量的比例位于所述第一阈值和第二阈值之间 或者高于所述第二阈值时, 则所述服务器釆用双协议栈, 所述双协议栈包括 IPv4 协议 栈和 IPv6协议栈。
29、 如权利要求 26所述的数据中心***的演进部署方法, 其特征在于, 如果所述 当前网络的第二类网络流量为 0, 则所述服务器釆用 IPv6协议栈。
PCT/CN2012/078773 2011-09-23 2012-07-17 数据中心***及装置和提供服务的方法 WO2013040942A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP12833678.1A EP2765747B1 (en) 2011-09-23 2012-07-17 Data centre system and apparatus, and method for providing service
US14/346,653 US8966050B2 (en) 2011-09-23 2012-07-17 Data centre system and method for a data centre to provide service

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201110286977.0A CN103023797B (zh) 2011-09-23 2011-09-23 数据中心***及装置和提供服务的方法
CN201110286977.0 2011-09-23

Publications (1)

Publication Number Publication Date
WO2013040942A1 true WO2013040942A1 (zh) 2013-03-28

Family

ID=47913853

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2012/078773 WO2013040942A1 (zh) 2011-09-23 2012-07-17 数据中心***及装置和提供服务的方法

Country Status (4)

Country Link
US (1) US8966050B2 (zh)
EP (1) EP2765747B1 (zh)
CN (1) CN103023797B (zh)
WO (1) WO2013040942A1 (zh)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104579729A (zh) * 2013-10-17 2015-04-29 华为技术有限公司 Cgn单板故障的通知方法及装置
CN105306553A (zh) * 2015-09-30 2016-02-03 北京奇艺世纪科技有限公司 访问请求调度方法及装置
US9391716B2 (en) 2010-04-05 2016-07-12 Microsoft Technology Licensing, Llc Data center using wireless communication
US9497039B2 (en) 2009-05-28 2016-11-15 Microsoft Technology Licensing, Llc Agile data center network architecture
US9954751B2 (en) 2015-05-29 2018-04-24 Microsoft Technology Licensing, Llc Measuring performance of a network using mirrored probe packets

Families Citing this family (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103004173B (zh) * 2012-09-29 2016-03-09 华为技术有限公司 一种分配地址的方法、装置及***
US10069903B2 (en) 2013-04-16 2018-09-04 Amazon Technologies, Inc. Distributed load balancer
US9509614B2 (en) * 2013-06-20 2016-11-29 Cisco Technology, Inc. Hierarchical load balancing in a network environment
CN103944831A (zh) * 2014-04-29 2014-07-23 中国科学院声学研究所 负载均衡方法及***
CN104539645A (zh) * 2014-11-28 2015-04-22 百度在线网络技术(北京)有限公司 一种用于处理http请求的方法与设备
CN104601340B (zh) * 2014-12-02 2018-10-09 腾讯科技(深圳)有限公司 一种跨idc的数据传输方法、设备及***
CN104618379B (zh) * 2015-02-04 2019-06-04 北京天地互连信息技术有限公司 一种面向idc业务场景的安全服务编排方法及网络结构
US9838302B1 (en) 2015-06-10 2017-12-05 Amazon Technologies, Inc. Managing loss of network connectivity in traffic forwarding systems
US10237157B1 (en) * 2015-06-10 2019-03-19 Amazon Technologies, Inc. Managing host failures in a traffic forwarding system
CN107231221B (zh) * 2016-03-25 2020-10-23 阿里巴巴集团控股有限公司 数据中心间的业务流量控制方法、装置及***
US10574741B2 (en) 2016-04-18 2020-02-25 Nokia Technologies Oy Multi-level load balancing
CN106131204A (zh) * 2016-07-22 2016-11-16 无锡华云数据技术服务有限公司 应用于负载均衡***的报文快速分发方法及其***
CN108347465B (zh) * 2017-01-23 2021-02-02 阿里巴巴集团控股有限公司 一种选择网络数据中心的方法及装置
CN108540397A (zh) * 2017-03-02 2018-09-14 华为技术有限公司 网络业务处理方法和负荷分担装置
CN109842641A (zh) * 2017-11-24 2019-06-04 深圳市科比特航空科技有限公司 一种无人机数据传输方法、装置及***
US11606418B2 (en) 2018-08-03 2023-03-14 Samsung Electronics Co., Ltd. Apparatus and method for establishing connection and CLAT aware affinity (CAA)-based scheduling in multi-core processor
CN109218219A (zh) * 2018-10-15 2019-01-15 迈普通信技术股份有限公司 一种负载均衡方法、装置、网络设备及存储介质
CN109547354B (zh) * 2018-11-21 2022-08-30 广州市百果园信息技术有限公司 负载均衡方法、装置、***、核心层交换机及存储介质
CN109618003B (zh) * 2019-01-14 2022-02-22 网宿科技股份有限公司 一种服务器规划方法、服务器及存储介质
CN109995881B (zh) * 2019-04-30 2021-12-14 网易(杭州)网络有限公司 缓存服务器的负载均衡方法和装置
CN110213114B (zh) * 2019-06-21 2024-04-09 深圳前海微众银行股份有限公司 去中心化的网络服务方法、装置、设备及可读存储介质
CN110601989A (zh) * 2019-09-24 2019-12-20 锐捷网络股份有限公司 一种网络流量均衡方法及装置
CN112751897B (zh) * 2019-10-31 2022-08-26 贵州白山云科技股份有限公司 负载均衡方法、装置、介质及设备
CN111314414B (zh) * 2019-12-17 2021-09-28 聚好看科技股份有限公司 数据传输方法、装置及***
CN111064809B (zh) * 2019-12-31 2022-05-24 中国工商银行股份有限公司 应用于网络隔离区的负载均衡方法和***
CN111343295B (zh) * 2020-02-18 2022-09-27 支付宝(杭州)信息技术有限公司 用于确定IPv6地址的风险的方法及装置
CN111988423A (zh) * 2020-08-31 2020-11-24 浪潮云信息技术股份公司 一种基于Nginx的网络四层与七层间的负载均衡方法及***
CN113014692A (zh) * 2021-03-16 2021-06-22 腾讯科技(深圳)有限公司 一种网络地址转换方法、装置、设备及存储介质
CN113542449A (zh) * 2021-07-13 2021-10-22 中国工商银行股份有限公司 一种域名解析方法、***、计算机设备及可读存储介质
CN113608877B (zh) * 2021-08-13 2023-11-10 牙木科技股份有限公司 一种内容商ipv4和ipv6资源池负载均衡调度方法
CN113691460B (zh) * 2021-08-26 2023-10-03 平安科技(深圳)有限公司 基于负载均衡的数据传输方法、装置、设备及存储介质
CN113890879B (zh) * 2021-09-10 2023-12-29 鸬鹚科技(深圳)有限公司 数据访问的负载均衡方法、装置、计算机设备及介质
US20230123734A1 (en) * 2021-10-20 2023-04-20 Google Llc Proxy-Less Private Connectivity Across VPC Networks With Overlapping Addresses
CN116743573B (zh) * 2023-08-15 2023-11-03 中移(苏州)软件技术有限公司 一种将K8s由IPv4切换为IPv6/IPv4双栈的方法、装置及相关设备

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101834831A (zh) * 2009-03-13 2010-09-15 华为技术有限公司 一种实现nat设备冗余备份的方法、装置和***
CN102075921A (zh) * 2009-11-24 2011-05-25 ***通信集团公司 一种网络间通信的方法和装置
US20110153831A1 (en) * 2009-12-23 2011-06-23 Rishi Mutnuru Systems and methods for mixed mode of ipv6 and ipv4 dns of global server load balancing

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9444785B2 (en) * 2000-06-23 2016-09-13 Cloudshield Technologies, Inc. Transparent provisioning of network access to an application
US20050183140A1 (en) * 2003-11-20 2005-08-18 Goddard Stephen M. Hierarchical firewall load balancing and L4/L7 dispatching
US7474661B2 (en) * 2004-03-26 2009-01-06 Samsung Electronics Co., Ltd. Apparatus and method for distributing forwarding table lookup operations among a plurality of microengines in a high-speed routing node
US7894438B2 (en) * 2007-06-07 2011-02-22 Ambriel Technologies Device and method for communicating with a legacy device, network or application
US7957399B2 (en) * 2008-12-19 2011-06-07 Microsoft Corporation Array-based routing of data packets
JP5387061B2 (ja) * 2009-03-05 2014-01-15 沖電気工業株式会社 情報変換装置、情報変換方法、情報変換プログラム及び中継装置
CN101600000A (zh) * 2009-06-26 2009-12-09 中国电信股份有限公司 IPv6用户访问IPv4站点的数据通信方法和***
US9054943B2 (en) * 2009-12-23 2015-06-09 Citrix Systems, Inc. Systems and methods for mixed mode handling of IPv6 and IPv4 traffic by a virtual server
CN101951411A (zh) * 2010-10-13 2011-01-19 戴元顺 云调度***及方法以及多级云调度***
CN102123087B (zh) * 2011-02-18 2014-01-08 天津博宇铭基信息科技有限公司 快速定标多级转发负载均衡方法及多级转发网络***
CN102075445B (zh) * 2011-02-28 2013-12-25 杭州华三通信技术有限公司 负载均衡方法及装置
CN202475471U (zh) * 2011-09-23 2012-10-03 百度在线网络技术(北京)有限公司 数据中心***及装置

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101834831A (zh) * 2009-03-13 2010-09-15 华为技术有限公司 一种实现nat设备冗余备份的方法、装置和***
CN102075921A (zh) * 2009-11-24 2011-05-25 ***通信集团公司 一种网络间通信的方法和装置
US20110153831A1 (en) * 2009-12-23 2011-06-23 Rishi Mutnuru Systems and methods for mixed mode of ipv6 and ipv4 dns of global server load balancing

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP2765747A4 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9497039B2 (en) 2009-05-28 2016-11-15 Microsoft Technology Licensing, Llc Agile data center network architecture
US9391716B2 (en) 2010-04-05 2016-07-12 Microsoft Technology Licensing, Llc Data center using wireless communication
US10110504B2 (en) 2010-04-05 2018-10-23 Microsoft Technology Licensing, Llc Computing units using directional wireless communication
CN104579729A (zh) * 2013-10-17 2015-04-29 华为技术有限公司 Cgn单板故障的通知方法及装置
CN104579729B (zh) * 2013-10-17 2019-03-01 华为技术有限公司 Cgn单板故障的通知方法及装置
US9954751B2 (en) 2015-05-29 2018-04-24 Microsoft Technology Licensing, Llc Measuring performance of a network using mirrored probe packets
CN105306553A (zh) * 2015-09-30 2016-02-03 北京奇艺世纪科技有限公司 访问请求调度方法及装置
CN105306553B (zh) * 2015-09-30 2018-08-07 北京奇艺世纪科技有限公司 访问请求调度方法及装置

Also Published As

Publication number Publication date
US20140258496A1 (en) 2014-09-11
US8966050B2 (en) 2015-02-24
EP2765747A1 (en) 2014-08-13
EP2765747A4 (en) 2015-06-10
CN103023797B (zh) 2016-06-15
EP2765747B1 (en) 2017-11-01
CN103023797A (zh) 2013-04-03

Similar Documents

Publication Publication Date Title
WO2013040942A1 (zh) 数据中心***及装置和提供服务的方法
US8259571B1 (en) Handling overlapping IP addresses in multi-tenant architecture
US9331979B2 (en) Facilitating content accessibility via different communication formats
CN102790808B (zh) 一种域名解析方法和***、一种客户端
WO2021073565A1 (zh) 业务服务提供方法及***
US9253148B2 (en) System and method for logging communications
JP2019526983A (ja) ブロードバンドリモートアクセスサーバの制御プレーン機能と転送プレーン機能の分離
US20120011230A1 (en) Utilizing a Gateway for the Assignment of Internet Protocol Addresses to Client Devices in a Shared Subset
US9699138B2 (en) Directing clients based on communication format
WO2012013133A1 (zh) 一种网络通信的方法和设备
TW201223206A (en) Multipath Transmission Control Protocol proxy
WO2009052668A1 (fr) Dispositif nat-pt et procédé de répartition de charge pour un dispositif nat-pt
JP5753172B2 (ja) ネットワークアドレス変換のための管理方法および管理デバイス
WO2009094928A1 (fr) Procédé et équipement de transmission d'un message basé sur le protocole de tunnel de niveau 2
JP5518202B2 (ja) エンドツーエンドコールの実現方法、エンドツーエンドコール端末及びシステム
WO2021073555A1 (zh) 业务服务提供方法及***、远端加速网关
US9654540B2 (en) Load balancing among network servers
AU2015264883A1 (en) Access control method and system, and access point
US20140313933A1 (en) Method, apparatus, and system for layer 2 interworking based on ipv6
US20140032782A1 (en) Method and apparatus for route selection of host in multihoming site
US20130089092A1 (en) Method for preventing address conflict, and access node
JP2013506358A5 (zh)
WO2011131097A1 (zh) 数据报文处理方法、***及接入服务节点
US9697173B2 (en) DNS proxy service for multi-core platforms
CN202475471U (zh) 数据中心***及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12833678

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 14346653

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

REEP Request for entry into the european phase

Ref document number: 2012833678

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2012833678

Country of ref document: EP