WO2020249128A1 - Service routing method and apparatus - Google Patents

Service routing method and apparatus

Info

Publication number
WO2020249128A1
Authority
WO
WIPO (PCT)
Prior art keywords
network
node
service
message
target
Prior art date
Application number
PCT/CN2020/096150
Other languages
English (en)
French (fr)
Inventor
顾叔衡
庄冠华
孙丰鑫
杨小敏
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2020249128A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 - Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/40 - Support for services or applications
    • H04L67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L67/01 - Protocols
    • H04L67/10 - Protocols in which an application is distributed across nodes in the network
    • H04L67/1001 - Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1004 - Server selection for load balancing
    • H04L67/50 - Network services
    • H04L67/60 - Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • H04L67/63 - Routing a service request depending on the request content or context
    • H04L45/00 - Routing or path finding of packets in data switching networks
    • H04L45/32 - Flooding
    • H04L45/74 - Address processing for routing

Definitions

  • This application relates to the field of communication technology, and in particular to a service routing method and device.
  • Compared with cloud computing, serving user equipment from edge computing nodes (for example, access equipment-room servers or aggregation equipment-room servers close to the user) reduces network waiting time.
  • A commonly used edge-computing service routing method uses anycast IP (Internet Protocol anycast, IP anycast) technology for addressing between different edge computing sites: edge computing nodes that provide the same service expose the same Internet Protocol (IP) address to the network, and when a user initiates a service request the network finds, at the network layer, the edge computing node closest to the user equipment according to the destination IP address corresponding to the service, and that edge computing node serves the user equipment.
  • However, because users are unevenly distributed, the load of edge computing nodes may be uneven. In areas with many users the load of edge computing nodes is high, which lowers the access success rate of user equipment; in areas with few users the load is low, which leaves the edge computing nodes under-utilized.
  • This application provides a service routing method and device to solve the problem that the load of edge computing nodes may be unbalanced in the current service routing process.
  • In a first aspect, the service routing method of an embodiment of this application includes: a network ingress node receives a first packet sent by user equipment, where the first packet is used to request a target service; the network ingress node sends a query measurement message to each of a plurality of network egress nodes, where the query measurement message is used to query the computing performance and/or network performance of a network egress node for the target service; the network ingress node receives response messages sent by the plurality of network egress nodes, where a response message carries the network egress node's computing performance information and/or network performance information for the target service; and the network ingress node determines a target network egress node among the plurality of network egress nodes based on the computing performance information and/or network performance information of the plurality of network egress nodes, where the target network egress node is used to implement packet routing between the user equipment and a service node, and the service node is used to provide the target service for the user equipment.
  • In this embodiment of this application, on receiving the first packet sent by the user equipment, the network ingress (Ingress) node can trigger a query of each network egress (Egress) node's computing performance for the target service (such as server load) and network performance (such as network delay), so that the Ingress node can select a suitable Egress node by combining the Egress nodes' computing performance and network performance for the target service.
  • In the prior art, a network device finds, at the network layer, the edge computing node closest to the user equipment according to the destination IP and lets that node serve the user equipment. By contrast, having the Ingress node combine the Egress nodes' computing performance and network performance for the target service to select a suitable Egress node implements a load-balancing strategy at the application layer, which reduces access delay and improves the utilization of service nodes.
  • In one possible design, the query measurement message may be a network query measurement message (such as an OAM query measurement message), and the network query measurement message may contain a request to measure the computing performance for the target service and/or a request to measure the network performance for the target service; the response message is a network query measurement response message (such as an OAM response message).
  • Specifically, if the query measurement message is used to query the network egress node's computing performance for the target service, the network query measurement message may include a request to measure the computing performance for the target service; if it is used to query the network performance, it may include a request to measure the network performance for the target service; and if it is used to query both, it may include requests to measure both the computing performance and the network performance for the target service.
  • Through this design, the network ingress node can use the network query measurement message to trigger the network egress nodes to measure and report their computing performance and network performance for the target service.
  • In one possible design, when the network ingress node determines the target network egress node among the plurality of network egress nodes based on their computing performance information and/or network performance information for the target service, then for any one of the plurality of network egress nodes, the network ingress node determines the service cost of that network egress node based on its computing performance information and/or network performance information for the target service, and selects the node with the smallest service cost among the plurality of network egress nodes as the target network egress node.
  • In this design, by comprehensively considering each network egress node's computing performance information and network performance information for the target service and choosing the network egress node with the smallest service cost for service access, a load-balancing strategy can be implemented at the application layer, the access delay can be reduced, and the utilization of service nodes can be improved.
  • In one possible design, the network ingress node can also copy the first packet into multiple copies and send them to the plurality of network egress nodes respectively.
  • In this design, by distributing the first packet to each network egress node, the network egress nodes can establish transport connections in advance, so that after determining the target network egress node the network ingress node can promptly route the packets sent by the user equipment to that target network egress node, thereby reducing packet transmission delay.
  • In one possible design, the query measurement message can be sent to the plurality of network egress nodes as a path-associated message of the first packet, or as a packet-accompanying message of the first packet. This design allows the network egress nodes to correctly receive both the first packet and the query measurement message.
  • In one possible design, after the target network egress node is determined, the network ingress node may also send a connection reset message (such as a TCP_RST message) to the network egress nodes other than the target network egress node. Releasing the transport connections established by the other network egress nodes in this way avoids wasting network resources.
  • In one possible design, when the network ingress node receives the first packet sent by the user equipment, specifically: the network ingress node receives a packet sent by the user equipment and determines, based on a first list, that the packet is the first packet with which the user equipment requests the target service, where the first list does not include a routing and forwarding record whose source Internet protocol (IP) address is the IP address of the user equipment and whose destination IP address is the IP address corresponding to the target service.
  • In this design, the network ingress node can determine whether a received packet is the first packet by checking whether it has recorded a routing and forwarding record from the user equipment to the target service.
  • In one possible design, before the query measurement messages are sent, the plurality of network egress nodes may be selected from M network egress nodes based on a second list, where the second list includes the service information of the M network egress nodes.
  • The service information includes one or more of the following: number of user connections, user connection capacity, local preference scheduling attribute, network delay service level requirement, and load service level requirement, where M is an integer.
  • In this design, by combining parameters such as each network egress node's number of user connections and user connection capacity, the network ingress node can preferentially select several network egress nodes with better access performance, further reducing the access delay and improving the utilization of service nodes.
  • In a second aspect, the service routing method of an embodiment of this application includes: a network egress node receives a query measurement message from a network ingress node, where the query measurement message is used to query the network egress node's computing performance and/or network performance for a target service; the network egress node determines its computing performance and/or network performance for the target service; and the network egress node sends a response message to the network ingress node, where the response message carries the network egress node's computing performance information and/or network performance information for the target service.
  • In this embodiment of this application, when the network ingress node sends a query measurement message, the network egress node can report to the network ingress node its own computing performance for the target service (such as server load) and its network performance (such as network delay), so that the Ingress node can select a suitable Egress node by combining the Egress nodes' computing performance and network performance for the target service.
  • Compared with the prior art, in which a network device finds, at the network layer, the edge computing node closest to the user equipment according to the destination IP to serve the user equipment, having the Ingress node combine the Egress nodes' computing performance and network performance for the target service to select a suitable Egress node implements a load-balancing strategy at the application layer, reducing access delay and improving the utilization of service nodes.
  • In one possible design, the query measurement message may be a network query measurement message (such as an OAM query measurement message), and the network query measurement message may contain a request to measure the computing performance for the target service and/or a request to measure the network performance for the target service; the response message is a network query measurement response message (such as an OAM response message). If the query measurement message is used to query the computing performance, the network query measurement message includes a request to measure the computing performance for the target service; if it is used to query the network performance, it includes a request to measure the network performance; and if it is used to query both, it includes both requests. Through this design, the network ingress node can use the network query measurement message to trigger the network egress node to measure and report its computing performance and network performance for the target service.
  • In one possible design, the network egress node receives the first packet forwarded by the network ingress node, where the first packet is the first packet sent by the user equipment to request the target service.
  • In this design, by distributing the first packet to each network egress node, the network ingress node lets the network egress nodes establish transport connections in advance, so that after determining the target network egress node it can promptly route the packets sent by the user equipment to that node, thereby reducing packet transmission delay.
  • In one possible design, the network egress node may also send the first packet to at least one service node, where a service node is a node that provides the target service.
  • In this design, by distributing the first packet to each service node, the service nodes can establish transport connections in advance, so that the network egress node can promptly route the packets forwarded by the network ingress node to a service node, thereby reducing packet transmission delay.
  • the network egress node may also receive a connection reset message sent by the network ingress node.
  • the connection reset message is used to release the transmission connection established by other network egress nodes, which can avoid the waste of network resources.
  • In a third aspect, this application provides a service routing apparatus. The apparatus may be a routing node, or a chip or chipset within a routing node, where the routing node may be a network ingress node or a network egress node.
  • The apparatus may include a processing unit and a transceiver unit.
  • When the apparatus is a routing node, the processing unit may be a processor and the transceiver unit may be a communication interface; the apparatus may further include a storage unit, which may be a memory. The storage unit is used to store instructions, and the processing unit executes the instructions stored in the storage unit, so that the network ingress node performs the corresponding functions of the first aspect, or the network egress node performs the corresponding functions of the second aspect.
  • When the apparatus is a chip or chipset within a routing node, the processing unit may be a processor and the transceiver unit may be an input/output interface, a pin, a circuit, or the like; the processing unit executes the instructions stored in a storage unit so that the network ingress node performs the corresponding functions of the first aspect, or the network egress node performs the corresponding functions of the second aspect.
  • The storage unit may be a storage unit inside the chip or chipset (for example, a register or a cache), or a storage unit in the routing node located outside the chip or chipset (for example, a read-only memory or a random access memory).
  • the present application also provides a computer-readable storage medium, the computer-readable storage medium includes instructions, which when run on a computer, cause the computer to execute the methods described in the foregoing aspects.
  • the present application also provides a computer program product including instructions, which when executed, cause the methods described in the above aspects to be executed.
  • the present application provides a chip including a processor and a communication interface, and the communication interface is used to receive code instructions and transmit them to the processor.
  • the processor is configured to invoke the code instructions transmitted by the communication interface to execute the methods described in the foregoing aspects.
  • FIG. 1 is a schematic structural diagram of a communication system provided by this application.
  • FIG. 2 is a schematic flowchart of a service routing method provided by this application.
  • FIG. 3 is a schematic diagram of service flooding provided by this application.
  • FIG. 4 is a schematic diagram of a service routing process provided by this application.
  • FIG. 5 is a schematic structural diagram of a service routing device provided by this application.
  • FIG. 6 is a schematic structural diagram of another service routing device provided by this application.
  • Compared with cloud computing, using edge computing nodes (such as access equipment-room servers or aggregation equipment-room servers close to users) to serve user equipment greatly reduces the overhead that services impose on the network and the network waiting time, because the transmission path is short and the latency is low.
  • However, compared with cloud computing, edge computing has the following specific problems: 1) there are a large number of edge computing nodes and their capacity is small; 2) edge computing nodes cannot help each other efficiently at the application level.
  • The concrete impact of these problems on edge computing nodes is as follows: 1) the load of edge computing nodes is unbalanced, which lowers the access success rate; 2) the utilization of edge computing nodes is low; 3) load-balancing strategies at the application level increase access delay and reduce efficiency.
  • At present, based on edge computing, one service routing method is to use anycast IP (Internet protocol anycast, IP anycast) technology for addressing between different edge computing nodes: edge computing nodes that provide the same service expose the same virtual Internet protocol (IP) address to the network and bind the real IP addresses of the edge computing nodes. When a user initiates a service request, the network can find, at the network layer, the edge computing node closest to the user equipment according to the destination IP address (virtual IP address) corresponding to the service, and that edge computing node provides the service for the user equipment.
  • With this method, however, the network cannot perceive the load of the edge computing node that serves the user equipment; if that node is overloaded, requests from users near the overloaded node are still routed to it, causing service access failures or degrading the users' service experience.
  • At the same time, an overloaded node cannot effectively divert service requests to lightly loaded nodes, so other edge computing nodes with lower load are under-utilized. For example, in areas with many users the load of edge computing nodes is high, which lowers the access success rate of user equipment, while in areas with few users the load of edge computing nodes is low, which leaves them under-utilized.
  • Another service routing method uses the technique of querying domain name addresses over a hypertext transfer protocol (HTTP) connection (that is, HTTP DNS technology) to combine the user's IP address with the load of the edge computing nodes near the user and select the most suitable nearby edge computing node as the user's service node.
  • With this method, however, most edge computing nodes tend to serve short-connection services, and at least three round trip times (RTT) are needed in the initial stage of establishing HTTP DNS before a service address can be returned, which greatly increases the service delay and is very costly for edge computing.
  • the embodiments of the present application provide a service routing method and device, which are used to solve the problems that the service routing in the prior art cannot take into account load balancing and the addressing efficiency is low.
  • the method and the device are based on the same inventive concept. Since the principles of the method and the device to solve the problem are similar, the implementation of the device and the method can be referred to each other, and the repetition will not be repeated.
  • the embodiments of the present application may be applied to a communication system, and the communication system may include a routing node and a service node, wherein one routing node can be connected to one or more service nodes.
  • The routing node can be used to provide service routing, for example to route service packets sent by user equipment to a service node; the service routing node that the user equipment accesses first can be called the network ingress (Ingress) node, and the other service routing nodes can be called network egress (Egress) nodes.
  • the Ingress node can be responsible for routing the message sent by the user to the Egress node, and the Egress node can be responsible for routing the message sent by the user to the service node.
  • the Egress node may also be the Ingress node itself.
  • Service nodes refer to nodes that provide services to users, and can also be called edge computing nodes.
  • FIG. 1 shows a schematic structural diagram of a communication system. It should be understood that FIG. 1 is only an exemplary illustration, and does not specifically limit the number of service routing nodes and service nodes included in the communication system.
  • the service routing method provided by the embodiment of the present application may be as shown in FIG. 2, and the method may be applied to the communication system shown in FIG. 1.
  • the method specifically includes:
  • S201: The Ingress node receives the first packet sent by user equipment, where the first packet is used to request a target service.
  • At the network layer, the service nodes can use IP anycast technology, that is, service nodes that provide the same service expose the same virtual IP address to the network and bind the service nodes' real IP addresses; therefore, all service nodes that provide the target service correspond to the same virtual IP address. It can be understood that if the destination IP address of the first packet is the virtual IP address corresponding to the target service, the first packet can be considered to be used to request the target service.
  • For ease of description, the following takes the target service as Service A and calls all nodes that provide the target service Service A nodes.
  • In specific implementation, after receiving a packet from the user equipment, the Ingress node can determine, based on the first list, whether the packet is the first packet with which the user equipment requests Service A; the first list is used to record the routing and forwarding records of the Ingress node.
  • A routing and forwarding record can support any combination from a 1-tuple to a 5-tuple.
  • Taking a 5-tuple as an example, a routing and forwarding record can include the source IP address (SRC_IP), source port (SRC_PORT), destination IP address (SERV_IP, here the virtual IP address corresponding to the service), destination port (SERV_PORT), and protocol (PROTOCAL). In addition, the routing and forwarding record can also include the identifier of the egress node, the age (age), the maximum age (MaxAge), and so on, where MaxAge identifies the maximum lifetime supported by the egress node; when an egress node's age exceeds MaxAge, that egress node's routing records, forwarding records, and registry records are deleted. For example, the first list may be as shown in Table 1.
  • In Table 1, 192.168.1.13 is the real IP address of the service node under the Ingress node that provides the service corresponding to 10.10.10.1. Assuming that 10.107.21.12 is the IP address of UE1, 10.10.10.1 is the virtual IP address corresponding to Service B, and 192.168.1.13 is the IP address of service node 1, the third routing and forwarding record in Table 1 can be understood as: service node 1 connected to the Ingress node provides the Service B service for UE1.
  • Taking a 2-tuple as an example, a routing and forwarding record can include the source IP address and the destination IP address.
  • In addition, the routing and forwarding record can also include the identifier of the egress node, age, MaxAge, and so on; for example, the first list can be as shown in Table 2.
  • Similarly, in Table 2, assuming that 10.107.21.12 is the IP address of UE1, 10.10.10.1 is the virtual IP address corresponding to Service B, and 192.168.1.13 is the real IP address of service node 1 under the Ingress node, the third routing and forwarding record can be understood as: service node 1 connected to the Ingress node provides the Service B service for UE1.
  • After the Ingress node receives a packet sent by the user equipment (the destination IP address of the packet is the virtual IP address corresponding to Service A), it can determine whether the packet is the first packet with which this user equipment requests Service A by checking whether the first list includes a routing and forwarding record from this user equipment to Service A. Specifically, if the first list includes a routing and forwarding record from this user equipment to a Service A node, that is, some routing and forwarding record in the first list has the user equipment's IP address as the source IP address and the virtual IP address corresponding to Service A as the destination IP address, it can be determined that the packet is not the first packet with which the user equipment requests Service A; otherwise, it can be determined that the packet is the first packet with which the user equipment requests Service A.
  • S202: The Ingress node sends a query measurement message to each of a plurality of Egress nodes.
  • The query measurement message is used to query an Egress node's computing performance and/or network performance for Service A, that is, it is used to query the Egress node's computing performance for Service A, or its network performance for Service A, or both its computing performance and network performance for Service A.
  • The computing performance can be, but is not limited to, server load, service time, and so on; the network performance can be, but is not limited to, network delay, network congestion, and so on.
  • For example, the query measurement message may be a network query measurement message (such as an operations, administration and maintenance (OAM) query measurement message), where the network query measurement message may include a request to measure the computing performance for Service A and/or a request to measure the network performance for Service A.
  • The response message may be a network query measurement response message (such as an OAM response message).
  • Specifically, if the query measurement message is used to query the Egress node's computing performance for Service A, the network query measurement message may include a request to measure the computing performance for Service A; if it is used to query the network performance, it may include a request to measure the network performance for Service A; and if it is used to query both, it may include requests to measure both the computing performance and the network performance for Service A.
  • The plurality of network egress nodes may be selected randomly by the Ingress node, or selected based on the second list, where the second list includes the service information of M Egress nodes and the service information includes one or more of the following: service address (Service IP, that is, the virtual IP address corresponding to a service provided by the Egress node), number of user connections (Connections), user connection capacity (Capacity), local preference scheduling attribute (LocalPref), network delay (Latency), network delay service level requirement (LatencySLA), load service level requirement (LoadSLA), and age (Age), where M is an integer.
  • In addition, the service information may include the cost value (Cost) corresponding to the Egress node.
  • the second list may be as shown in Table 3.
  • As an example, the Ingress node can select a plurality of Egress nodes from the M Egress nodes according to the service information of the M Egress nodes recorded in the second list; for example, it can select several Egress nodes with a small Cost among the M Egress nodes, or several Egress nodes with a small Cost and few Connections, and so on.
  • Of course, the Ingress node can use other selection methods that combine the service information of each Egress node to select a plurality of Egress nodes from the M Egress nodes; these are not listed one by one here.
  • In some embodiments, after receiving a service registration request from a service node, an Egress node may update its local service registry based on the registration information of the service node, where the local service registry may record the registration information of each service node.
  • The registration information of a service node may include the service node's Service IP, host IP (HostIP), port (Port), Capacity, Connections, Latency, Age, MaxAge, and so on. For example, the local service registry may be as shown in Table 4, and the second list recorded by the Egress node is updated based on the updated local service registry.
  • The Egress node can diffuse its service information outward (service flooding) through network protocols (for example, open shortest path first (OSPF), border gateway protocol (BGP), and so on), that is, send the Egress node's service information to other Egress nodes.
  • After receiving this Egress node's service information, the other Egress nodes update their own second lists based on it and further diffuse the updated service information of this Egress node outward, as exemplified in FIG. 3.
  • The Egress node can be responsible for synchronizing the service load with the Service A nodes in real time (that is, the Egress node is responsible for aggregating the capacity and load of all Service A nodes under it), and for measuring the network RTT from the Egress node to each Service A node.
  • In addition, the Ingress node can also copy the first packet into multiple copies and send them to the plurality of Egress nodes respectively.
  • When sending the first packet to an Egress node, the Ingress node can send the query measurement message together with the packet; for example, the query measurement message can be embedded in the header of the first packet for transmission.
  • Alternatively, when sending the first packet to an Egress node, the Ingress node can send the query measurement message along the same path, that is, as a path-associated message of the first packet, meaning the query measurement message and the first packet are sent on the same link. Specifically, the Ingress node may send the first packet first and then the query measurement message, or send the query measurement message first and then the first packet.
  • S203: The Ingress node receives the response messages sent by the plurality of Egress nodes, where a response message carries the Egress node's computing performance information and/or network performance information.
  • In specific implementation, when an Egress node receives the query measurement message, it stamps a timestamp and returns to the Ingress node the Connections and Capacity of all Service A nodes under that Egress node.
  • S204: The Ingress node determines a target Egress node among the plurality of Egress nodes based on the computing performance information and/or network performance information of the plurality of Egress nodes, where the target network egress node is used to implement packet routing between the user equipment and a service node.
  • The service node is used to provide the target service for the user equipment.
  • Specifically, for any Egress node among the plurality of Egress nodes, the Ingress node can determine that Egress node's service cost based on its computing performance and/or network performance, and the Ingress node selects the Egress node with the smallest service cost as the target Egress node.
  • That is, after receiving the response messages sent by the plurality of Egress nodes, the Ingress node can calculate the service cost of each Egress node and select the Egress node with the lowest service cost as the target Egress node.
  • In addition, after the target Egress node is determined, the other candidate Egress nodes (that is, the Egress nodes other than the target Egress node among the plurality of Egress nodes) can be discarded.
  • For example, if the first packet is a TCP_SYN packet, the Ingress node can send a connection-reset TCP_RST packet to the other candidate Egress nodes after determining the target Egress node.
  • After receiving the TCP_RST packet corresponding to Service A, the other candidate Egress nodes need to forward it and clear their flow tables.
  • In specific implementation, for load-sensitive services, the Ingress node may ignore the influence of network performance (such as network delay, network congestion, and so on), so that the Egress node with the best computing performance (such as the lightest load) can be selected as the target Egress node.
  • For delay-sensitive services, the Ingress node may ignore the influence of computing performance (such as server load), so that the Egress node with the best network performance (such as the smallest network delay) can be selected as the target Egress node.
  • Alternatively, the computing performance and the network performance can be considered together and fused into a service cost, so that the Egress node with the smallest service cost can be selected as the target Egress node.
  • Taking the computing performance as server load and the network performance as network delay as an example, when the Ingress node fuses computing performance and network performance to determine an Egress node's service cost, it can combine the following parameters: latency weight (LatencyWeight), load weight (LoadWeight), LatencySLA, and LoadSLA.
  • As an example, the function with which the Ingress node combines LatencyWeight, LoadWeight, LatencySLA, and LoadSLA to calculate an Egress node's service cost (Cost) may specifically be: if (Latency <= LatencySLA) && (Load <= LoadSLA), then Cost = LatencyWeight * Latency / LatencySLA + LoadWeight * (Connections + 1) / Capacity; otherwise Cost = Maximum, where Cost = Maximum indicates that the service cannot be accessed (that is, the service SLA is not met).
  • Further, if the target Egress node is connected to multiple Service A nodes, the target Egress node needs to locally calculate the cost of each Service A node and select the Service A node with the lowest cost for access.
  • In addition, the target egress node may also add a routing and forwarding record from the user equipment to that Service A node in its recorded first list.
  • As shown in FIG. 4, the service routing process includes:
  • Step S401: The Ingress node receives a packet sent by user equipment, where the source IP address of the packet is the IP address of the user equipment and the destination IP address is the virtual IP address corresponding to Service A. Step S402 is performed.
  • Step S402: The Ingress node determines whether the first list includes a routing and forwarding record from the IP address of the user equipment to the virtual IP address corresponding to Service A. If yes, step S403 is performed; if not, step S404 is performed.
  • Step S403: The Ingress node forwards the packet according to the routing and forwarding record in the first list from the IP address of the user equipment to the virtual IP address corresponding to Service A.
  • Step S404: The Ingress node sends the packet and an OAM query measurement message to a plurality of Egress nodes according to the second list, where the OAM query measurement message carries a request to measure the computing performance for Service A and a request to measure the network performance for Service A.
  • In addition, a routing and forwarding record from the IP address of the user equipment to the virtual IP address corresponding to Service A is added to the first list. Step S405 is performed.
  • The packet and the OAM query measurement message can be sent along the same path or together in one packet.
  • For example, the second list may be created and updated based on the process shown in FIG. 3.
  • Step S405: The Ingress node receives the OAM response messages sent by the plurality of Egress nodes, where an OAM response message carries the Egress node's computing performance information and network performance information. Step S406 is performed.
  • Step S406: The Ingress node selects the best Egress node. Specifically, the Ingress node may determine the cost value of each Egress node according to each Egress node's computing performance information and network performance information, and select the Egress node with the smallest cost value as the best Egress node (that is, the target Egress node).
  • Based on the same inventive concept as the method embodiments, an embodiment of this application provides a service routing device.
  • The structure of the service routing device may be as shown in FIG. 5, including a processing unit 501, a first transceiver unit 502, and a second transceiver unit 503.
  • The device is specifically used to implement the functions of the network ingress node in the embodiments of FIG. 2 to FIG. 4.
  • The device may be the network ingress node itself, or a chip or chipset in the network ingress node, or a part of a chip used to perform the related method functions.
  • Specifically, the first transceiver unit 502 is configured to receive the first packet sent by user equipment, where the first packet is used to request a target service.
  • The second transceiver unit 503 is configured to send a query measurement message to each of a plurality of network egress nodes, where the query measurement message is used to query a network egress node's computing performance and/or network performance for the target service, and to receive the response messages sent by the plurality of network egress nodes, where a response message carries the network egress node's computing performance information and/or network performance information for the target service.
  • The processing unit 501 is configured to determine a target network egress node among the plurality of network egress nodes based on the computing performance information and/or network performance information of the plurality of network egress nodes, where the target network egress node is used to implement packet routing between the user equipment and a service node, and the service node is used to provide the target service for the user equipment.
  • For example, the query measurement message may be a network query measurement message, where the network query measurement message contains a request to measure the computing performance for the target service and/or a request to measure the network performance for the target service; the response message may be a network query measurement response message.
  • In some embodiments, the processing unit may be specifically configured to: for any network egress node among the plurality of network egress nodes, determine the network egress node's service cost based on the network egress node's computing performance information and/or network performance information for the target service; and select the node with the smallest service cost among the plurality of network egress nodes as the target network egress node.
  • the second transceiving unit 503 may also be used to copy the first message into multiple copies to send to multiple network egress nodes respectively.
  • Further, when sending the query measurement message to the plurality of network egress nodes, the second transceiver unit 503 may be specifically configured to: send the query measurement message to the plurality of network egress nodes as a path-associated message of the first packet; or send the query measurement message to the plurality of network egress nodes as a packet-accompanying message of the first packet.
  • After the target network egress node is determined among the plurality of network egress nodes based on their computing performance information and/or network performance information, the second transceiver unit 503 may also be used to send connection reset messages to the network egress nodes other than the target network egress node.
  • In some embodiments, the first transceiver unit 502 may be specifically configured to receive a packet sent by the user equipment.
  • The processing unit 501 is further configured to determine, based on the first list, that the packet is the first packet with which the user equipment requests the target service.
  • Here, the first list does not include a routing and forwarding record whose source IP address is the IP address of the user equipment and whose destination IP address is the IP address corresponding to the target service.
  • Before the query measurement messages are sent to the plurality of network egress nodes, the processing unit 501 may also be used to select the plurality of network egress nodes from M network egress nodes based on the second list, where the second list includes the service information of the M network egress nodes.
  • The service information includes one or more of the following: the number of user connections, user connection capacity, local preference scheduling attribute, network delay service level requirement, and load service level requirement, where M is an integer.
  • The division of modules in the embodiments of this application is illustrative and is merely a division of logical functions; in actual implementation there may be other division methods.
  • The functional modules in the embodiments of this application may be integrated into one processing unit, may exist alone physically, or two or more modules may be integrated into one module.
  • The above integrated modules may be implemented in the form of hardware or in the form of software functional modules.
  • the service routing apparatus can be as shown in FIG. 6, and the processing unit 501 can be a processor 602.
  • the processor 602 may be a central processing unit (CPU), or a digital processing module, and so on.
  • The first transceiver unit 502 can be a first communication interface 601a, and the second transceiver unit 503 can be a second communication interface 601b.
  • The first communication interface 601a and the second communication interface 601b can be transceivers, interface circuits such as transceiver circuits, transceiver chips, and so on.
  • The service routing device further includes a memory 603 for storing programs executed by the processor 602.
  • The memory 603 may be a non-volatile memory, such as a hard disk drive (HDD) or a solid-state drive (SSD), or a volatile memory, such as a random-access memory (RAM).
  • The memory 603 may also be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
  • the processor 602 is configured to execute the program code stored in the memory 603, and is specifically configured to execute the actions of the above-mentioned processing unit 501, which will not be repeated in this application.
  • the embodiment of the present application does not limit the specific connection medium among the first communication interface 601a, the second communication interface 601b, the processor 602, and the memory 603.
  • The memory 603, the processor 602, the first communication interface 601a, and the second communication interface 601b are connected by a bus 604; in FIG. 6 the bus is represented by a thick line, and the connection manner between other components is merely schematic and not limiting.
  • The bus can be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is used in FIG. 6, but this does not mean that there is only one bus or one type of bus.
  • The embodiments of this application can be provided as a method, a system, or a computer program product. Therefore, this application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, this application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, and so on) containing computer-usable program code.
  • These computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus, where the instruction apparatus implements the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions can also be loaded onto a computer or other programmable data processing equipment, so that a series of operation steps are performed on the computer or other programmable equipment to produce computer-implemented processing, and the instructions executed on the computer or other programmable equipment thus provide steps for implementing the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

A service routing method and apparatus, used to solve the problem of unbalanced load on edge computing nodes during service routing. The method includes: a network ingress node receives a first packet sent by user equipment, the first packet being used to request a target service; the network ingress node sends a query measurement message to each of a plurality of network egress nodes, the query measurement message being used to query the computing performance and/or network performance of a network egress node for the target service; the network ingress node receives the response messages sent by the plurality of network egress nodes, a response message carrying the network egress node's computing performance information and/or network performance information for the target service; and the network ingress node determines a target network egress node among the plurality of network egress nodes based on the computing performance information and/or network performance information of the plurality of network egress nodes.

Description

Service routing method and apparatus
Cross-reference to related applications
This application claims priority to the Chinese patent application No. 201910517956.1, entitled "Service routing method and apparatus" and filed with the Chinese Patent Office on June 14, 2019, which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of communication technologies, and in particular to a service routing method and apparatus.
Background
With the rise of 5G, smart cities, and the Internet of Things, the analysis and storage of massive amounts of data pose a huge challenge to network bandwidth. Because many data flows are generated by end devices (such as user equipment) but are processed and analyzed through cloud computing, real-time decisions cannot be made, resulting in severe network latency. Therefore, compared with cloud computing, using edge computing nodes (for example, access equipment-room servers or aggregation equipment-room servers close to the user) to serve user equipment can reduce network waiting time.
At present, a commonly used edge-computing-based service routing method is to use anycast IP (internet protocol anycast, IP anycast) technology for addressing between different edge computing sites, that is, edge computing nodes providing the same service expose the same internet protocol (IP) address to the network. When a user initiates a service request, the network can find, at the network layer, the edge computing node closest to the user equipment according to the destination IP address corresponding to the service, and that edge computing node provides the service for the user equipment.
However, because users are unevenly distributed, the load of edge computing nodes may be unbalanced. For example, in areas with many users the load of the edge computing nodes is high, which lowers the access success rate of user equipment, while in areas with few users the load of the edge computing nodes is low, which leaves the edge computing nodes under-utilized.
Summary
This application provides a service routing method and apparatus to solve the problem that the load of edge computing nodes may be unbalanced in the current service routing process.
In a first aspect, the service routing method of an embodiment of this application includes: a network ingress node receives a first packet sent by user equipment, where the first packet is used to request a target service; the network ingress node sends a query measurement message to each of a plurality of network egress nodes, where the query measurement message is used to query the computing performance and/or network performance of a network egress node for the target service; the network ingress node receives the response messages sent by the plurality of network egress nodes, where a response message carries the network egress node's computing performance information and/or network performance information for the target service; and the network ingress node determines a target network egress node among the plurality of network egress nodes based on the computing performance information and/or network performance information of the plurality of network egress nodes, where the target network egress node is used to implement packet routing between the user equipment and a service node, and the service node is used to provide the target service for the user equipment.
In this embodiment of this application, on receiving the first packet sent by the user equipment, the network ingress (Ingress) node can trigger a query of each network egress (Egress) node's computing performance for the target service (such as server load) and network performance (such as network delay), so that the Ingress node can select a suitable Egress node by combining the Egress nodes' computing performance and network performance for the target service. Compared with the prior art, in which a network device finds, at the network layer, the edge computing node closest to the user equipment according to the destination IP to serve the user equipment, having the Ingress node combine the Egress nodes' computing performance and network performance for the target service to select a suitable Egress node implements a load-balancing strategy at the application layer, which can reduce access delay and improve the utilization of service nodes.
In one possible design, the query measurement message may be a network query measurement message (such as an OAM query measurement message), and the network query measurement message may contain a request to measure the computing performance for the target service and/or a request to measure the network performance for the target service; the response message is a network query measurement response message (such as an OAM response message). Specifically, if the query measurement message is used to query the network egress node's computing performance for the target service, the network query measurement message may include a request to measure the computing performance for the target service; if it is used to query the network performance, it may include a request to measure the network performance for the target service; and if it is used to query both, it may include requests to measure both the computing performance and the network performance for the target service. Through this design, the network ingress node can use the network query measurement message to trigger the network egress nodes to measure and report their computing performance and network performance for the target service.
In one possible design, when the network ingress node determines the target network egress node among the plurality of network egress nodes based on their computing performance information and/or network performance information for the target service, then for any one of the plurality of network egress nodes, the network ingress node determines the service cost of that network egress node based on its computing performance information and/or network performance information for the target service, and the network ingress node selects the node with the smallest service cost among the plurality of network egress nodes as the target network egress node. In this design, by comprehensively considering each network egress node's computing performance information and network performance information for the target service and selecting the network egress node with the smallest service cost for service access, a load-balancing strategy can be implemented at the application layer; this design can also reduce access delay and improve the utilization of service nodes.
In one possible design, the network ingress node may also copy the first packet into multiple copies and send them to the plurality of network egress nodes respectively. In this design, by distributing the first packet to each network egress node, the network egress nodes can establish transport connections in advance, so that after determining the target network egress node the network ingress node can promptly route the packets sent by the user equipment to that target network egress node, thereby reducing packet transmission delay.
In one possible design, when the network ingress node sends the query measurement message to the plurality of network egress nodes, it may send the query measurement message to the plurality of network egress nodes as a path-associated message of the first packet, or as a packet-accompanying message of the first packet. Through this design, the network egress nodes can correctly receive both the first packet and the query measurement message.
In one possible design, after the network ingress node determines the target network egress node among the plurality of network egress nodes based on their computing performance information and/or network performance information, the network ingress node may also send a connection reset message (such as a TCP_RST message) to the network egress nodes other than the target network egress node. In this design, the connection reset message releases the transport connections established by the other network egress nodes, avoiding waste of network resources.
In one possible design, when the network ingress node receives the first packet sent by the user equipment, specifically: the network ingress node receives a packet sent by the user equipment, and the network ingress node determines, based on a first list, that the packet is the first packet with which the user equipment requests the target service, where the first list does not include a routing and forwarding record whose source internet protocol (IP) address is the IP address of the user equipment and whose destination IP address is the IP address corresponding to the target service. In this design, the network ingress node can determine whether a received packet is the first packet by checking whether it has recorded a routing and forwarding record from the user equipment to the target service.
In one possible design, before the network ingress node sends the query measurement message to the plurality of network egress nodes, the plurality of network egress nodes may be selected from M network egress nodes based on a second list, where the second list includes the service information of the M network egress nodes and the service information includes one or more of the following: number of user connections, user connection capacity, local preference scheduling attribute, network delay service level requirement, and load service level requirement, where M is an integer. In this design, by combining parameters such as each network egress node's number of user connections and user connection capacity, the network ingress node can preferentially select several network egress nodes with better access performance, further reducing the access delay and improving the utilization of service nodes.
In a second aspect, the service routing method of an embodiment of this application includes: a network egress node receives a query measurement message from a network ingress node, where the query measurement message is used to query the network egress node's computing performance and/or network performance for a target service; the network egress node determines its computing performance and/or network performance for the target service; and the network egress node sends a response message to the network ingress node, where the response message carries the network egress node's computing performance information and/or network performance information for the target service.
In this embodiment of this application, when the network ingress node sends the query measurement message, the network egress node can report to the network ingress node its own computing performance for the target service (such as server load) and its network performance (such as network delay), so that the Ingress node can select a suitable Egress node by combining the Egress nodes' computing performance and network performance for the target service. Compared with the prior art, in which a network device finds, at the network layer, the edge computing node closest to the user equipment according to the destination IP to serve the user equipment, having the Ingress node combine the Egress nodes' computing performance and network performance for the target service to select a suitable Egress node implements a load-balancing strategy at the application layer, which can reduce access delay and improve the utilization of service nodes.
In one possible design, the query measurement message may be a network query measurement message (such as an OAM query measurement message), and the network query measurement message may contain a request to measure the computing performance for the target service and/or a request to measure the network performance for the target service; the response message is a network query measurement response message (such as an OAM response message). Specifically, if the query measurement message is used to query the network egress node's computing performance for the target service, the network query measurement message may include a request to measure the computing performance for the target service; if it is used to query the network performance, it may include a request to measure the network performance for the target service; and if it is used to query both, it may include requests to measure both. Through this design, the network ingress node can use the network query measurement message to trigger the network egress node to measure and report its computing performance and network performance for the target service.
In one possible design, the network egress node receives the first packet forwarded by the network ingress node, where the first packet is the first packet sent by the user equipment to request the target service. In this design, by distributing the first packet to each network egress node, the network ingress node lets the network egress nodes establish transport connections in advance, so that after determining the target network egress node it can promptly route the packets sent by the user equipment to that node, thereby reducing packet transmission delay.
In one possible design, the network egress node may also send the first packet to at least one service node, where a service node is a node that provides the target service. In this design, by distributing the first packet to each service node, the service nodes can establish transport connections in advance, so that the network egress node can promptly route the packets forwarded by the network ingress node to a service node, thereby reducing packet transmission delay.
In one possible design, the network egress node may also receive a connection reset message sent by the network ingress node. In this design, the connection reset message releases the transport connections established by the other network egress nodes, avoiding waste of network resources.
In a third aspect, this application provides a service routing apparatus. The apparatus may be a routing node, or a chip or chipset within a routing node, where the routing node may be a network ingress node or a network egress node. The apparatus may include a processing unit and a transceiver unit. When the apparatus is a routing node, the processing unit may be a processor and the transceiver unit may be a communication interface; the apparatus may further include a storage unit, which may be a memory; the storage unit is used to store instructions, and the processing unit executes the instructions stored in the storage unit, so that the network ingress node performs the corresponding functions of the first aspect, or the network egress node performs the corresponding functions of the second aspect. When the apparatus is a chip or chipset within a routing node, the processing unit may be a processor and the transceiver unit may be an input/output interface, a pin, a circuit, or the like; the processing unit executes the instructions stored in a storage unit so that the network ingress node performs the corresponding functions of the first aspect, or the network egress node performs the corresponding functions of the second aspect. The storage unit may be a storage unit inside the chip or chipset (for example, a register or a cache), or a storage unit in the routing node located outside the chip or chipset (for example, a read-only memory or a random access memory).
In a fourth aspect, this application further provides a computer-readable storage medium. The computer-readable storage medium includes instructions that, when run on a computer, cause the computer to perform the methods described in the foregoing aspects.
In a fifth aspect, this application further provides a computer program product including instructions that, when executed, cause the methods described in the foregoing aspects to be performed.
In a sixth aspect, this application provides a chip, where the chip includes a processor and a communication interface. The communication interface is used to receive code instructions and transmit them to the processor, and the processor is configured to invoke the code instructions transmitted by the communication interface to perform the methods described in the foregoing aspects.
Brief Description of the Drawings
FIG. 1 is a schematic structural diagram of a communication system provided by this application;
FIG. 2 is a schematic flowchart of a service routing method provided by this application;
FIG. 3 is a schematic diagram of service flooding provided by this application;
FIG. 4 is a schematic diagram of a service routing process provided by this application;
FIG. 5 is a schematic structural diagram of a service routing apparatus provided by this application;
FIG. 6 is a schematic structural diagram of another service routing apparatus provided by this application.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention.
With the rise of 5G, smart cities, and the Internet of Things, the analysis and storage of massive amounts of data pose a huge challenge to network bandwidth. Because many data flows are generated by end devices (such as user equipment) but are processed and analyzed through cloud computing, real-time decisions cannot be made, resulting in severe network latency. Therefore, compared with cloud computing, using edge computing nodes (for example, access equipment-room servers or aggregation equipment-room servers close to the user) to serve user equipment can, because the transmission path is short and the latency is low, greatly reduce the overhead that services impose on the network as well as the network waiting time.
However, compared with cloud computing, edge computing has the following specific problems: 1) there are a large number of edge computing nodes and their capacity is small; 2) edge computing nodes cannot help each other efficiently at the application level.
The concrete impact of the above problems on edge computing nodes is as follows: 1) the load of edge computing nodes is unbalanced, which lowers the access success rate; 2) the utilization of edge computing nodes is low; 3) load-balancing strategies at the application level increase access delay and reduce efficiency.
At present, based on edge computing, one service routing method is to use anycast IP (internet protocol anycast, IP anycast) technology for addressing between different edge computing nodes, that is, edge computing nodes providing the same service expose the same virtual internet protocol (IP) address to the network and bind the real IP addresses of the edge computing nodes. When a user initiates a service request, the network can find, at the network layer, the edge computing node closest to the user equipment according to the destination IP address (virtual IP address) corresponding to the service, and that edge computing node provides the service for the user equipment. With this method, however, the network cannot perceive the load of the edge computing node serving the user equipment; if that node is overloaded, requests from users near the overloaded node are still routed to it, causing service access failures or degrading the users' service experience. At the same time, the overloaded node cannot effectively divert service requests to lightly loaded nodes, so other edge computing nodes with lower load are under-utilized. For example, in areas with many users the load of edge computing nodes is high, which lowers the access success rate of user equipment, while in areas with few users the load of edge computing nodes is low, which leaves them under-utilized.
Another service routing method uses the technique of querying domain name addresses over a hypertext transfer protocol (HTTP) connection (that is, HTTP DNS technology) to combine the user's IP address with the load of the edge computing nodes near the user and select the most suitable nearby edge computing node as the user's service node. However, with this method most edge computing nodes tend to serve short-connection services, and at least three round trip times (RTT) are needed in the initial stage of establishing HTTP DNS before a service address can be returned, which greatly increases the service delay and is very costly for edge computing.
On this basis, the embodiments of this application provide a service routing method and apparatus, which are used to solve the problems in the prior art that service routing cannot take load balancing into account and that addressing efficiency is low. The method and the apparatus are based on the same inventive concept; because the principles by which the method and the apparatus solve the problem are similar, the implementations of the apparatus and the method can refer to each other, and repeated details are not described again.
The embodiments of this application can be applied to a communication system. The communication system may include routing nodes and service nodes, where one routing node can be connected to one or more service nodes. A routing node can be used to provide service routing, for example to route service packets sent by user equipment to a service node; the service routing node that the user equipment accesses first can be called the network ingress (Ingress) node, and the other service routing nodes can be called network egress (Egress) nodes. The Ingress node can be responsible for routing packets sent by the user to an Egress node, and the Egress node can be responsible for routing packets sent by the user to a service node. In specific implementation, the Egress node may also be the Ingress node itself. A service node is a node that provides services to users, and may also be called an edge computing node. For example, FIG. 1 shows a schematic structural diagram of a communication system. It should be understood that FIG. 1 is only an exemplary illustration and does not specifically limit the number of service routing nodes and service nodes included in the communication system.
The embodiments of this application are described in detail below with reference to the accompanying drawings.
The service routing method provided by the embodiments of this application may be as shown in FIG. 2. The method may be applied to the communication system shown in FIG. 1 and specifically includes:
S201: The Ingress node receives a first packet sent by user equipment, where the first packet is used to request a target service. At the network layer, the service nodes can use IP anycast technology, that is, service nodes providing the same service expose the same virtual IP address to the network and bind the service nodes' real IP addresses; therefore, all service nodes that provide the target service correspond to the same virtual IP address. It can be understood that if the destination IP address of the first packet is the virtual IP address corresponding to the target service, the first packet can be considered to be used to request the target service.
For ease of description, the following takes the target service as Service A and calls all nodes that provide the target service Service A nodes.
In specific implementation, after receiving a packet from the user equipment, the Ingress node can determine, based on a first list, whether the packet is the first packet with which the user equipment requests Service A; the first list is used to record the routing and forwarding records of the Ingress node. A routing and forwarding record can support any combination from a 1-tuple to a 5-tuple. Taking a 5-tuple as an example, a routing and forwarding record can include the source IP address (SRC_IP), source port (SRC_PORT), destination IP address (SERV_IP, here the virtual IP address corresponding to the service), destination port (SERV_PORT), and protocol (PROTOCAL). In addition, the routing and forwarding record can also include the identifier of the egress node, the age (age), the maximum age (MaxAge), and so on, where MaxAge identifies the maximum lifetime supported by the egress node; when an egress node's age exceeds MaxAge, that egress node's routing records, forwarding records, and registry records are deleted. For example, the first list may be as shown in Table 1.
Table 1
(Table 1 is provided in the original publication as image PCTCN2020096150-appb-000001.)
Here, 192.168.1.13 is the real IP address of the service node under the Ingress node that provides the service corresponding to 10.10.10.1. Assuming that 10.107.21.12 is the IP address of UE1, 10.10.10.1 is the virtual IP address corresponding to Service B, and 192.168.1.13 is the IP address of service node 1, the third routing and forwarding record in Table 1 can be understood as: service node 1 connected to the Ingress node provides the Service B service for UE1.
Taking a 2-tuple as an example, a routing and forwarding record can include the source IP address and the destination IP address; in addition, the routing and forwarding record can also include the identifier of the egress node, age, MaxAge, and so on. For example, the first list may be as shown in Table 2.
Table 2
SRC_IP          SERV_IP      egress          age    MaxAge
10.197.29.1     10.10.10.2   R1              2s     10s
10.137.28.2     10.10.10.2   R2              5s     10s
10.107.21.12    10.10.10.1   192.168.1.13    1s     10s
……              ……           ……              ……     ……
Similarly, in Table 2, 192.168.1.13 is the real IP address of the service node under the Ingress node that provides the service corresponding to 10.10.10.1. Assuming that 10.107.21.12 is the IP address of UE1, 10.10.10.1 is the virtual IP address corresponding to Service B, and 192.168.1.13 is the IP address of service node 1, the third routing and forwarding record in Table 2 can be understood as: service node 1 connected to the Ingress node provides the Service B service for UE1.
After the Ingress node receives a packet sent by the user equipment (the destination IP address of the packet is the virtual IP address corresponding to Service A), it can determine whether the packet is the first packet with which this user equipment requests Service A by checking whether the first list includes a routing and forwarding record from this user equipment to Service A. Specifically, if the first list includes a routing and forwarding record from this user equipment to a Service A node, that is, some routing and forwarding record in the first list has the user equipment's IP address as the source IP address and the virtual IP address corresponding to Service A as the destination IP address, it can be determined that the packet is not the first packet with which the user equipment requests Service A; otherwise, it can be determined that the packet is the first packet with which the user equipment requests Service A.
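The first-packet check just described can be pictured with a small sketch. The following Python snippet is only an illustration under assumed data structures: the first list is modeled as an in-memory dictionary keyed by (source IP, service virtual IP), with the age/MaxAge expiry described above; none of the identifiers below are defined by this application.

    import time

    # Hypothetical in-memory first list: one routing/forwarding record per
    # (source IP, service virtual IP) pair, in the spirit of the 2-tuple form of Table 2.
    first_list = {}  # (src_ip, serv_ip) -> {"egress": ..., "created": ..., "max_age": ...}

    def is_first_packet(src_ip, serv_ip):
        """Return True if no live routing/forwarding record exists for this UE/service pair."""
        record = first_list.get((src_ip, serv_ip))
        if record is None:
            return True
        # Records whose age exceeds MaxAge are deleted, as described for the first list.
        if time.time() - record["created"] > record["max_age"]:
            del first_list[(src_ip, serv_ip)]
            return True
        return False

    def add_record(src_ip, serv_ip, egress, max_age=10.0):
        """Record that packets from src_ip to serv_ip are routed via the given egress node."""
        first_list[(src_ip, serv_ip)] = {"egress": egress, "created": time.time(), "max_age": max_age}

    # Example: the first packet from UE1 to Service A's virtual IP triggers egress selection (S202).
    if is_first_packet("10.107.21.12", "10.10.10.1"):
        add_record("10.107.21.12", "10.10.10.1", egress="R1")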
S202: The Ingress node sends a query measurement message to each of a plurality of Egress nodes, where the query measurement message is used to query an Egress node's computing performance and/or network performance for Service A, that is, the query measurement message is used to query the Egress node's computing performance for Service A, or its network performance for Service A, or both its computing performance and network performance for Service A. Here, the computing performance can be, but is not limited to, server load, service time, and so on, and the network performance can be, but is not limited to, network delay, network congestion, and so on.
For example, the query measurement message may be a network query measurement message (such as an operations, administration and maintenance (OAM) query measurement message), where the network query measurement message may include a request to measure the computing performance for Service A and/or a request to measure the network performance for Service A. The response message may be a network query measurement response message (such as an OAM response message). Specifically, if the query measurement message is used to query the Egress node's computing performance for Service A, the network query measurement message may include a request to measure the computing performance for Service A; if it is used to query the Egress node's network performance for Service A, the network query measurement message may include a request to measure the network performance for Service A; and if it is used to query both, the network query measurement message may include requests to measure both the computing performance and the network performance for Service A.
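As a rough illustration of the three query variants mentioned above, the snippet below builds a query measurement message as a plain dictionary. The application does not define a concrete encoding for the OAM query; the field names here are assumptions made only for this example.

    def build_query_measurement(service_ip, want_compute, want_network):
        """Build an OAM-style query for an Egress node's performance toward one service."""
        if not (want_compute or want_network):
            raise ValueError("the query must request computing and/or network measurement")
        return {
            "type": "oam_query_measurement",
            "service_ip": service_ip,          # virtual IP of the target service, e.g. Service A
            "measure_compute": want_compute,   # computing performance: server load, service time, ...
            "measure_network": want_network,   # network performance: delay, congestion, ...
        }

    # Example: query both the computing performance and the network performance for Service A.
    query = build_query_measurement("10.10.10.1", want_compute=True, want_network=True)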
The plurality of network egress nodes may be selected randomly by the Ingress node, or selected based on a second list, where the second list includes the service information of M Egress nodes and the service information includes one or more of the following: service address (Service IP, that is, the virtual IP address corresponding to a service provided by the Egress node), number of user connections (Connections), user connection capacity (Capacity), local preference scheduling attribute (LocalPref), network delay (Latency), network delay service level requirement (LatencySLA), load service level requirement (LoadSLA), and age (Age), where M is an integer. In addition, the service information may include the cost value (Cost) corresponding to the Egress node.
For example, the second list may be as shown in Table 3.
Table 3
(Table 3 is provided in the original publication as images PCTCN2020096150-appb-000002 and PCTCN2020096150-appb-000003.)
As an example, the Ingress node can select a plurality of Egress nodes from the M Egress nodes according to the service information of the M Egress nodes recorded in the second list; for example, it can select several Egress nodes with a small Cost among the M Egress nodes, or, for example, several Egress nodes with a small Cost and few Connections, and so on. Of course, the Ingress node can use other selection methods that combine the service information of each Egress node to select a plurality of Egress nodes from the M Egress nodes; these are not listed one by one here.
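One of the selection policies just mentioned (small Cost, then few Connections) can be sketched as follows. The entries and field names mirror the second list of Table 3, but the concrete data and the policy itself are illustrative only, since the text also allows random selection or other combinations of the service information.

    # Hypothetical second-list entries keyed by egress node identifier (fields as in Table 3).
    second_list = {
        "R1": {"Cost": 10, "Connections": 6, "Capacity": 10},
        "R2": {"Cost": 30, "Connections": 4, "Capacity": 10},
        "R3": {"Cost": 15, "Connections": 9, "Capacity": 10},
    }

    def select_candidate_egress(entries, k=2):
        """Pick k candidate Egress nodes, preferring a small Cost and then few Connections."""
        ranked = sorted(entries, key=lambda e: (entries[e]["Cost"], entries[e]["Connections"]))
        return ranked[:k]

    candidates = select_candidate_egress(second_list)  # with the data above: ["R1", "R3"]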
在一些实施例中,Egress节点在接收来自服务节点的服务注册请求后,可以基于该服务节点的注册信息更新该Egress节点的本地服务注册表,其中,本地服务注册表可以各个服务节点的注册信息,服务节点的注册信息可以包括服务节点的Service IP、服务节点的主机IP(HostIP)、服务节点的端口(Port)、服务节点的Capacity、服务节点的Connections、服务节点的Latency、服务节点的Age、服务节点的MaxAge等,示例性的,本地服务注册表可以如表4所示,并基于更新后的本地服务注册表更新该Egress节点记录的第二列表中。该Egress节点可以通过网络协议(例如开放式最短路径优先(open shortest path first,OSPF)、边界网关协议(border gateway protocol,BGP)等)向外扩散该Egress节点的服务信息(服务泛洪),即将该Egress节点的服务信息发送给其他Egress节点。其他Egress节点在接收到该Egress节点的服务信息后,基于该Egress节点的服务信息更新自身记录的第二列表,并将向外扩散接收到该Egress节点更新后的服务信息。示例性的,如图3所示。
Table 4
Service IP HostIP Port Capacity Connections Latency Age MaxAge
10.10.10.1 192.168.1.20 1234 10 6 0ms 42 100s
10.10.10.2 192.168.1.20 5678 10 4 0ms 72 100s
10.10.10.1 192.168.1.22 7846 10 3 0ms 87 100s
The Egress node may be responsible for synchronizing service load with the Service A nodes in real time (that is, the Egress node aggregates the capacity and load of all Service A nodes under it) and for measuring the network RTT from the Egress node to each Service A node.
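The registration and aggregation step can be pictured as in the sketch below. The per-node records mirror the shape of Table 4, but the flat list representation and the function names register_service_node and aggregate_service_info are assumptions made only for this illustration.

local_registry = []  # per-service-node records, roughly the shape of Table 4

def register_service_node(service_vip, host_ip, port, capacity, connections, latency_ms):
    """Record a service node's registration in the egress node's local service registry."""
    local_registry.append({
        "service_vip": service_vip, "host_ip": host_ip, "port": port,
        "capacity": capacity, "connections": connections, "latency_ms": latency_ms,
    })

def aggregate_service_info(service_vip):
    """Aggregate Capacity/Connections of all local nodes providing the service, as advertised to peer egress nodes."""
    nodes = [r for r in local_registry if r["service_vip"] == service_vip]
    return {
        "service_vip": service_vip,
        "capacity": sum(r["capacity"] for r in nodes),
        "connections": sum(r["connections"] for r in nodes),
        "latency_ms": min((r["latency_ms"] for r in nodes), default=None),
    }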
In addition, the Ingress node may duplicate the first packet and send one copy to each of the multiple Egress nodes.
When sending the first packet to an Egress node, the Ingress node may send the query-measurement packet in-packet, for example by embedding the query-measurement packet into the header of the first packet.
Alternatively, when sending the first packet to an Egress node, the Ingress node may send the query-measurement packet in-band, that is, as a companion packet of the first packet, sent over the same link as the first packet. Specifically, the Ingress node may send the first packet before the query-measurement packet, or send the query-measurement packet before the first packet.
S203: The Ingress node receives the reply packets sent by the multiple Egress nodes; a reply packet carries the Egress node's computing performance information and/or network performance information.
In a specific implementation, when an Egress node receives the query-measurement packet, it stamps a timestamp and returns to the Ingress node the Connections and Capacity of all Service A nodes under that Egress node.
S204: The Ingress node determines a target Egress node among the multiple Egress nodes based on the computing performance information and/or network performance information of the multiple Egress nodes. The target network egress node is used to route packets between the user equipment and a service node, and the service node provides the target service for the user equipment.
Specifically, for any one of the multiple Egress nodes, the Ingress node may determine that Egress node's service cost based on its computing performance and/or network performance; the Ingress node then selects the node with the smallest service cost among the multiple Egress nodes as the target Egress node.
More specifically, after receiving the reply packets from the multiple Egress nodes, the Ingress node may compute each Egress node's service cost and select the Egress node with the lowest service cost as the target Egress node. After the target Egress node is determined, the other candidate Egress nodes (that is, the Egress nodes among the multiple Egress nodes other than the target Egress node) can be discarded. For example, if the first packet is a TCP_SYN packet, then after determining the target Egress node, the Ingress node may send a connection-reset TCP_RST packet to the other candidate Egress nodes. After receiving the TCP_RST packet corresponding to Service A, the other candidate Egress nodes forward it and clear the corresponding flow-table entries.
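Assuming the first packet was indeed a TCP SYN, the clean-up of the non-selected candidates might look like the minimal sketch below; send_tcp_rst is a hypothetical callable standing in for whatever mechanism the ingress node actually uses to reset the half-open attempts.

def discard_other_candidates(candidate_egresses, target_egress, send_tcp_rst):
    """Send a connection-reset toward every candidate egress except the selected target."""
    for egress in candidate_egresses:
        if egress != target_egress:
            send_tcp_rst(egress)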
In a specific implementation, for load-sensitive services the Ingress node may ignore the influence of network performance (for example, network delay or network congestion) and thus select the Egress node with the best computing performance (for example, the lightest load) as the target Egress node.
For delay-sensitive services, the Ingress node may ignore the influence of computing performance (for example, server load) and thus select the Egress node with the best network performance (for example, the smallest network delay) as the target Egress node. Alternatively, computing performance and network performance may be considered jointly: a service cost is obtained through a combined computation, and the Egress node with the smallest service cost is selected as the target Egress node.
Taking server load as the computing performance and network delay as the network performance, when the Ingress node combines computing performance and network performance to determine an Egress node's service cost, it may use the following parameters: latency weight (LatencyWeight), load weight (LoadWeight), LatencySLA, and LoadSLA.
As an illustration, the function with which the Ingress node computes an Egress node's service cost (Cost) from LatencyWeight, LoadWeight, LatencySLA, and LoadSLA may be:
if (Latency <= LatencySLA) and (Load <= LoadSLA):
    Cost = LatencyWeight * Latency / LatencySLA + LoadWeight * (Connections + 1) / Capacity
else:
    Cost = Maximum
Here, Cost = Maximum indicates that the service is not admissible (that is, the service SLA is not met).
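Written out as ordinary Python, the cost function and the selection of the lowest-cost egress could look like the sketch below. The 0.5 default weights, the reading of Load as (Connections + 1) / Capacity, and the reply format (egress_id mapped to a latency/connections/capacity triple) are assumptions made for this example, not values fixed by this application.

MAXIMUM = float("inf")  # sentinel meaning the service SLA is not met

def service_cost(latency, connections, capacity,
                 latency_sla, load_sla, latency_weight=0.5, load_weight=0.5):
    """Combined cost as in the formula above; returns MAXIMUM when the SLA is violated."""
    load = (connections + 1) / capacity
    if latency <= latency_sla and load <= load_sla:
        return latency_weight * latency / latency_sla + load_weight * load
    return MAXIMUM

def pick_target_egress(replies, latency_sla, load_sla):
    """replies: mapping egress_id -> (latency, connections, capacity) taken from the reply packets."""
    costs = {egress: service_cost(lat, conn, cap, latency_sla, load_sla)
             for egress, (lat, conn, cap) in replies.items()}
    if not costs:
        return None
    best = min(costs, key=costs.get)
    return None if costs[best] == MAXIMUM else best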
Further, if the target Egress node is connected to multiple Service A nodes, the target Egress node computes locally the cost of each Service A node and selects the lowest-cost Service A node for access. In addition, the target Egress node may add, in the first list it records, a routing/forwarding record from the user equipment to that Service A node.
For a better understanding of the embodiments of this application, the service routing process is described in detail below with reference to a specific scenario.
As shown in Figure 4, the service routing process includes:
S401: The Ingress node receives a packet sent by the user equipment, where the source IP address of the packet is the user equipment's IP address and the destination IP address is the virtual IP address corresponding to Service A. Go to step S402.
S402: The Ingress node determines whether the first list contains a routing/forwarding record from the user equipment's IP address to the virtual IP address corresponding to Service A. If yes, go to step S403; if not, go to step S404.
S403: The Ingress node forwards the packet according to the routing/forwarding record in the first list from the user equipment's IP address to the virtual IP address corresponding to Service A.
S404: The Ingress node sends, according to the second list, the packet and an OAM query-measurement packet to multiple Egress nodes, where the OAM query-measurement packet carries a measurement request for the computing performance of Service A and a measurement request for the network performance of Service A. The Ingress node also adds, in the first list, a routing/forwarding record from the user equipment's IP address to the virtual IP address corresponding to Service A. Go to step S405.
The packet and the OAM query-measurement packet may be sent in-band or in-packet.
As an example, the second list may be created and updated based on the process shown in Figure 3.
S405: The Ingress node receives the OAM reply packets sent by the multiple Egress nodes, where an OAM reply packet carries the Egress node's computing performance information and network performance information. Go to step S406.
S406: The Ingress node selects the best Egress node. Specifically, the Ingress node may determine each Egress node's cost value according to its computing performance information and network performance information, and select the Egress node with the smallest cost value as the best Egress node (that is, the target Egress node).
Based on the same inventive concept as the method embodiments, an embodiment of this application provides a service routing apparatus, whose structure may be as shown in Figure 5 and which includes a processing unit 501, a first transceiver unit 502, and a second transceiver unit 503. The apparatus is specifically used to implement the functions of the network ingress node in the embodiments of Figures 2 to 4; it may be the network ingress node itself, or a chip or chipset in the network ingress node, or the part of a chip used to perform the related method functions. Specifically, the first transceiver unit 502 is configured to receive the first packet sent by the user equipment, the first packet being used to request a target service. The second transceiver unit 503 is configured to send a query-measurement packet to each of multiple network egress nodes, the query-measurement packet being used to query the network egress node's computing performance and/or network performance for the target service, and to receive the reply packets sent by the multiple network egress nodes, a reply packet carrying the network egress node's computing performance information and/or network performance information for the target service. The processing unit 501 is configured to determine a target network egress node among the multiple network egress nodes based on the computing performance information and/or network performance information of the multiple network egress nodes, the target network egress node being used to route packets between the user equipment and a service node, and the service node being used to provide the target service for the user equipment.
As an example, the query-measurement packet may be a network query-measurement packet that contains a measurement request for the computing performance of the target service and/or a measurement request for the network performance of the target service; the reply packet may be a network query-measurement reply packet.
In some embodiments, the processing unit may be specifically configured to: for any one of the multiple network egress nodes, determine that network egress node's service cost based on its computing performance information and/or network performance information for the target service; and select the node with the smallest service cost among the multiple network egress nodes as the target network egress node.
The second transceiver unit 503 may further be configured to duplicate the first packet and send one copy to each of the multiple network egress nodes.
Further, when sending the query-measurement packet to the multiple network egress nodes, the second transceiver unit 503 may be specifically configured to: send the query-measurement packet to the multiple network egress nodes as an in-band companion packet of the first packet; or send the query-measurement packet to the multiple network egress nodes piggybacked within the first packet.
In addition, after the target network egress node is determined among the multiple network egress nodes based on their computing performance information and/or network performance information, the second transceiver unit 503 may further be configured to: send a connection-reset packet to the network egress nodes, among the multiple network egress nodes, other than the target network egress node.
In one illustrative description, the first transceiver unit 502 may be specifically configured to receive a packet sent by the user equipment, and the processing unit 501 is further configured to determine, based on a first list, that the packet is the first packet with which the user equipment requests the target service, the first list containing no routing/forwarding record whose source IP address is the user equipment's IP address and whose destination IP address is the IP address corresponding to the target service.
Before the query-measurement packet is sent to the multiple network egress nodes, the processing unit 501 may further be configured to: select the multiple network egress nodes from M network egress nodes based on a second list, the second list containing the service information of the M network egress nodes, the service information including one or more of the following: number of user connections, user connection capacity, local-preference scheduling attribute, network latency service-level requirement, and load service-level requirement, M being an integer.
The division into modules in the embodiments of this application is illustrative and is merely a division by logical function; other divisions are possible in actual implementation. In addition, the functional modules in the embodiments of this application may be integrated into one processor, may exist physically on their own, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module.
When the integrated module is implemented in the form of hardware, the service routing apparatus may be as shown in Figure 6. The processing unit 501 may be a processor 602; the processor 602 may be a central processing unit (CPU), a digital processing module, or the like. The first transceiver unit 502 may be a first communication interface 601a, and the second transceiver unit 503 may be a second communication interface 601b; the first communication interface 601a and the second communication interface 601b may each be a transceiver, an interface circuit such as a transceiver circuit, a transceiver chip, or the like. The service routing apparatus further includes a memory 603 for storing the program executed by the processor 602. The memory 603 may be a non-volatile memory such as a hard disk drive (HDD) or a solid-state drive (SSD), or a volatile memory such as a random-access memory (RAM). The memory 603 is, without limitation, any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
The processor 602 is configured to execute the program code stored in the memory 603, and specifically to perform the actions of the processing unit 501 described above, which are not repeated here.
The embodiments of this application do not limit the specific connection medium among the first communication interface 601a, the second communication interface 601b, the processor 602, and the memory 603. In Figure 6, the memory 603, the processor 602, the first communication interface 601a, and the second communication interface 601b are connected by a bus 604, shown as a thick line; the way other components are connected is only illustrative and not limiting. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is used in Figure 6, but this does not mean there is only one bus or one type of bus.
Those skilled in the art should understand that the embodiments of this application may be provided as a method, a system, or a computer program product. Therefore, this application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, this application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
This application is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of this application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps is performed on the computer or other programmable device to produce computer-implemented processing, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Obviously, those skilled in the art can make various changes and variations to the embodiments of this application without departing from the scope of the embodiments of this application. If these modifications and variations fall within the scope of the claims of this application and their technical equivalents, this application is intended to include these changes and variations as well.

Claims (18)

  1. A service routing method, comprising:
    receiving, by a network ingress node, a first packet sent by user equipment, wherein the first packet is used to request a target service;
    sending, by the network ingress node, a query-measurement packet to each of a plurality of network egress nodes, wherein the query-measurement packet is used to query the network egress node's computing performance and/or network performance for the target service;
    receiving, by the network ingress node, reply packets sent by the plurality of network egress nodes, wherein a reply packet carries the network egress node's computing performance information and/or network performance information for the target service; and
    determining, by the network ingress node, a target network egress node among the plurality of network egress nodes based on the computing performance information and/or network performance information of the plurality of network egress nodes, wherein the target network egress node is used to route packets between the user equipment and a service node, and the service node is used to provide the target service for the user equipment.
  2. The method according to claim 1, wherein the query-measurement packet is a network query-measurement packet that contains a measurement request for the computing performance of the target service and/or a measurement request for the network performance of the target service;
    and the reply packet is a network query-measurement reply packet.
  3. The method according to claim 1 or 2, wherein determining, by the network ingress node, the target network egress node among the plurality of network egress nodes based on the computing performance information and/or network performance information of the plurality of network egress nodes for the target service comprises:
    for any one of the plurality of network egress nodes, determining, by the network ingress node, that network egress node's service cost based on its computing performance information and/or network performance information for the target service; and
    selecting, by the network ingress node, the node with the smallest service cost among the plurality of network egress nodes as the target network egress node.
  4. The method according to any one of claims 1 to 3, further comprising:
    duplicating, by the network ingress node, the first packet and sending one copy to each of the plurality of network egress nodes.
  5. The method according to claim 4, wherein sending, by the network ingress node, a query-measurement packet to each of the plurality of network egress nodes comprises:
    sending, by the network ingress node, the query-measurement packet to the plurality of network egress nodes as an in-band companion packet of the first packet; or
    sending, by the network ingress node, the query-measurement packet to the plurality of network egress nodes piggybacked within the first packet.
  6. The method according to claim 4 or 5, wherein, after the network ingress node determines the target network egress node among the plurality of network egress nodes based on their computing performance information and/or network performance information, the method further comprises:
    sending, by the network ingress node, a connection-reset packet to the network egress nodes, among the plurality of network egress nodes, other than the target network egress node.
  7. The method according to any one of claims 1 to 6, wherein receiving, by the network ingress node, the first packet sent by the user equipment comprises:
    receiving, by the network ingress node, a packet sent by the user equipment; and
    determining, by the network ingress node based on a first list, that the packet is the first packet with which the user equipment requests the target service, wherein the first list contains no routing/forwarding record whose source Internet Protocol (IP) address is the user equipment's IP address and whose destination IP address is the IP address corresponding to the target service.
  8. The method according to any one of claims 1 to 7, wherein, before the network ingress node sends the query-measurement packet to the plurality of network egress nodes, the method further comprises:
    selecting, by the network ingress node based on a second list, the plurality of network egress nodes from M network egress nodes, wherein the second list contains service information of the M network egress nodes, the service information includes one or more of the following: number of user connections, user connection capacity, local-preference scheduling attribute, network latency service-level requirement, and load service-level requirement, and M is an integer.
  9. A service routing apparatus, comprising:
    a first communication interface for data transmission between the service routing apparatus and user equipment;
    a second communication interface for data transmission between the service routing apparatus and network egress nodes; and
    a processor configured to:
    receive, through the first communication interface, a first packet sent by the user equipment, wherein the first packet is used to request a target service;
    send, through the second communication interface, a query-measurement packet to each of a plurality of network egress nodes, wherein the query-measurement packet is used to query the network egress node's computing performance and/or network performance for the target service;
    receive, through the second communication interface, reply packets sent by the plurality of network egress nodes, wherein a reply packet carries the network egress node's computing performance information and/or network performance information for the target service; and
    determine a target network egress node among the plurality of network egress nodes based on the computing performance information and/or network performance information of the plurality of network egress nodes, wherein the target network egress node is used to route packets between the user equipment and a service node, and the service node is used to provide the target service for the user equipment.
  10. The service routing apparatus according to claim 9, wherein the query-measurement packet is a network query-measurement packet that contains a measurement request for the computing performance of the target service and/or a measurement request for the network performance of the target service;
    and the reply packet is a network query-measurement reply packet.
  11. The service routing apparatus according to claim 9 or 10, wherein the processor, when determining the target network egress node among the plurality of network egress nodes based on their computing performance information and/or network performance information, is specifically configured to:
    for any one of the plurality of network egress nodes, determine that network egress node's service cost based on its computing performance information and/or network performance information for the target service; and
    select the node with the smallest service cost among the plurality of network egress nodes as the target network egress node.
  12. The service routing apparatus according to any one of claims 9 to 11, wherein the processor is further configured to:
    duplicate the first packet and send one copy, through the second communication interface, to each of the plurality of network egress nodes.
  13. The service routing apparatus according to claim 12, wherein the processor, when sending the query-measurement packet to the plurality of network egress nodes through the second communication interface, is specifically configured to:
    send the query-measurement packet, through the second communication interface, to the plurality of network egress nodes as an in-band companion packet of the first packet; or
    send the query-measurement packet, through the second communication interface, to the plurality of network egress nodes piggybacked within the first packet.
  14. The service routing apparatus according to claim 12 or 13, wherein the processor, after determining the target network egress node among the plurality of network egress nodes based on their computing performance information and/or network performance information, is further configured to:
    send a connection-reset packet, through the second communication interface, to the network egress nodes, among the plurality of network egress nodes, other than the target network egress node.
  15. The service routing apparatus according to any one of claims 9 to 14, wherein the processor, when receiving the first packet sent by the user equipment through the first communication interface, is specifically configured to:
    receive, through the first communication interface, a packet sent by the user equipment; and
    determine, based on a first list, that the packet is the first packet with which the user equipment requests the target service, wherein the first list contains no routing/forwarding record whose source Internet Protocol (IP) address is the user equipment's IP address and whose destination IP address is the IP address corresponding to the target service.
  16. The service routing apparatus according to any one of claims 9 to 15, wherein, before sending the query-measurement packet to the plurality of network egress nodes through the second communication interface, the processor is further configured to:
    select the plurality of network egress nodes from M network egress nodes based on a second list, wherein the second list contains service information of the M network egress nodes, the service information includes one or more of the following: number of user connections, user connection capacity, local-preference scheduling attribute, network latency service-level requirement, and load service-level requirement, and M is an integer.
  17. A computer-readable storage medium comprising instructions which, when run on a computer, cause the computer to perform the method according to any one of claims 1 to 8.
  18. A chip, comprising a processor and a communication interface;
    wherein the communication interface is configured to receive code instructions and transmit them to the processor;
    and the processor is configured to invoke the code instructions transmitted by the communication interface to perform the method according to any one of claims 1 to 8.
PCT/CN2020/096150 2019-06-14 2020-06-15 Service routing method and apparatus WO2020249128A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910517956.1 2019-06-14
CN201910517956.1A CN112087382B (zh) 2019-06-14 Service routing method and apparatus

Publications (1)

Publication Number Publication Date
WO2020249128A1 true WO2020249128A1 (zh) 2020-12-17

Family

ID=73734403

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/096150 WO2020249128A1 (zh) 2019-06-14 2020-06-15 Service routing method and apparatus

Country Status (2)

Country Link
CN (1) CN112087382B (zh)
WO (1) WO2020249128A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112866107A (zh) * 2021-01-25 2021-05-28 网宿科技股份有限公司 IP address advertisement method, traffic steering method, and network device

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116155829A (zh) * 2021-11-19 2023-05-23 贵州白山云科技股份有限公司 Network traffic processing method and apparatus, medium, and electronic device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1245614A (zh) * 1996-12-13 2000-02-23 艾利森电话股份有限公司 Dynamic traffic distribution
CN102752128A (zh) * 2012-04-19 2012-10-24 杭州华三通信技术有限公司 MPLS TE tunnel fault detection method and device
US9071514B1 (en) * 2012-12-17 2015-06-30 Juniper Networks, Inc. Application-specific connectivity loss detection for multicast virtual private networks
CN105991459A (zh) * 2015-02-15 2016-10-05 上海帝联信息科技股份有限公司 CDN node back-to-origin route allocation method, apparatus and system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101287105B (zh) * 2008-06-03 2011-05-25 中兴通讯股份有限公司 Edge EPG server load balancing method and apparatus, and user login implementation method
US10374944B2 (en) * 2017-09-25 2019-08-06 Futurewei Technologies, Inc. Quality of service for data transmission
CN108494612B (zh) * 2018-01-19 2021-06-08 西安电子科技大学 Network system providing mobile edge computing services and service method thereof
CN108282801B (zh) * 2018-01-26 2021-03-30 重庆邮电大学 Handover management method based on mobile edge computing
CN109361600B (zh) * 2018-04-20 2021-08-10 ***通信有限公司研究院 Method and device for obtaining a path identifier

Also Published As

Publication number Publication date
CN112087382A (zh) 2020-12-15
CN112087382B (zh) 2022-03-29

Similar Documents

Publication Publication Date Title
US11165879B2 (en) Proxy server failover protection in a content delivery network
WO2020228469A1 (zh) 一种移动边缘计算节点的选择方法、装置及***
WO2020228505A1 (zh) 一种移动边缘计算节点的选择方法、装置及***
EP3391628B1 (en) Use of virtual endpoints to improve data transmission rates
US10348639B2 (en) Use of virtual endpoints to improve data transmission rates
US9762494B1 (en) Flow distribution table for packet flow load balancing
JP7252213B2 (ja) コンテキストアウェア型の経路の計算及び選択
KR101987784B1 (ko) 소프트웨어 정의 네트워크를 기반으로 내용 배포 네트워크를 구현하는 방법 및 시스템
WO2018152919A1 (zh) 一种路径选取方法及***、网络加速节点及网络加速***
US8694675B2 (en) Generalized dual-mode data forwarding plane for information-centric network
US20170180217A1 (en) Use of virtual endpoints to improve data tranmission rates
JP7313480B2 (ja) スライスベースネットワークにおける輻輳回避
WO2020249129A1 (zh) 一种网络路由方法及装置
WO2013029569A1 (en) A Generalized Dual-Mode Data Forwarding Plane for Information-Centric Network
WO2021253889A1 (zh) 负载均衡方法、装置、代理设备、缓存设备及服务节点
US10009282B2 (en) Self-protecting computer network router with queue resource manager
WO2017162117A1 (zh) 一种集群精确限速方法和装置
WO2020249128A1 (zh) 一种服务路由方法及装置
JP2015523020A (ja) DiameterシグナリングルータにおいてDiameterメッセージをルーティングするための方法、システムおよびコンピュータ読取可能媒体
US10382339B2 (en) Large scale bandwidth management of IP flows using a hierarchy of traffic shaping devices
CN116633934A (zh) 负载均衡方法、装置、节点及存储介质
WO2016206751A1 (en) Method and apparatus for managing traffic received from a client device in a communication network
CN110601989A (zh) 一种网络流量均衡方法及装置
EP3258665A1 (en) Network storage method, switch device, and controller
Saifullah et al. Open flow-based server load balancing using improved server health reports

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20822240

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20822240

Country of ref document: EP

Kind code of ref document: A1