WO2021088592A1 - Service offloading method, apparatus and system, electronic device and storage medium - Google Patents

Service offloading method, apparatus and system, electronic device and storage medium

Info

Publication number
WO2021088592A1
Authority
WO
WIPO (PCT)
Prior art keywords
offload
address
service flow
flow data
response message
Prior art date
Application number
PCT/CN2020/120325
Other languages
English (en)
French (fr)
Inventor
张卓筠
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 filed Critical 腾讯科技(深圳)有限公司
Priority to JP2022518308A priority Critical patent/JP7427082B2/ja
Priority to EP20885398.6A priority patent/EP3985931A4/en
Priority to KR1020227008280A priority patent/KR20220039814A/ko
Publication of WO2021088592A1 publication Critical patent/WO2021088592A1/zh
Priority to US17/451,746 priority patent/US20220038378A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/12 Avoiding congestion; Recovering from congestion
    • H04L47/125 Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
    • H04L47/24 Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L47/2441 Traffic characterised by specific attributes, e.g. priority or QoS relying on flow classification, e.g. using integrated services [IntServ]
    • H04L61/00 Network arrangements, protocols or services for addressing or naming
    • H04L61/45 Network directories; Name-to-address mapping
    • H04L61/4505 Network directories; Name-to-address mapping using standardised directories; using standardised directory access protocols
    • H04L61/4511 Network directories; Name-to-address mapping using standardised directories; using standardised directory access protocols using domain name system [DNS]

Definitions

  • This application relates to the field of computer and communication technology, and in particular to a service offloading method, apparatus and system, and an electronic device and storage medium.
  • At present, in the 3GPP 5G standardization work, offloading solutions for the local network have been proposed to route specific service flows to the local network.
  • The main offloading schemes include the Uplink Classifier (UL CL) scheme and the Internet Protocol version 6 (IPv6) multi-homing scheme.
  • A service offloading method includes: detecting received service flow data according to a preset rule to obtain a detection result; if the detection result satisfies the preset rule, using the network address in the service flow data as an offload address to generate an offload strategy; and offloading uplink service flow data packets of the terminal device to the edge network according to the offload strategy, the destination address of the uplink service flow data packets being the offload address.
  • A service offloading apparatus includes:
  • a detection module, configured to detect received service flow data according to a preset rule to obtain a detection result;
  • a generating module, configured to, if the detection result satisfies the preset rule, use the network address in the service flow data as an offload address to generate an offload strategy;
  • an offload module, configured to offload uplink service flow data packets of the terminal device to the edge network according to the offload strategy, the destination address of the uplink service flow data packets being the offload address.
  • A service offloading system includes a terminal device, a user plane function entity, a session management function entity, and an edge network, wherein:
  • the terminal device is configured to send uplink service flow data packets to the user plane function entity and to receive downlink service flow data packets sent by the user plane function entity;
  • the user plane function entity is configured to detect received service flow data according to a preset rule to obtain a detection result, and, if the detection result satisfies the preset rule, use the network address in the service flow data as an offload address to generate an offload strategy, and offload uplink service flow data packets of the terminal device to the edge network according to the offload strategy, the destination address of the uplink service flow data packets being the offload address;
  • the session management function entity is configured to receive the offload strategy sent by the user plane function entity, and to return to the user plane function entity a response message agreeing to or rejecting the offload strategy;
  • the edge network is configured to receive the uplink service flow data packets offloaded by the user plane function entity, and to send downlink service flow data packets to the user plane function entity.
  • an electronic device including a memory and a processor
  • a computer program is stored in the memory
  • the processor is configured to execute the computer program to enable the electronic device to implement the method described in any embodiment of the present application.
  • A computer-readable storage medium has a computer program stored thereon; when the computer program is executed by a processor, an electronic device including the processor implements the method described in any embodiment of this application.
  • Fig. 1 shows a system architecture diagram of the application of a service offloading method according to an exemplary embodiment of the present application.
  • Fig. 2 shows a system architecture diagram of the application of the service offloading method according to an exemplary embodiment of the present application.
  • Fig. 3 shows a flowchart of a service offloading method according to an embodiment of the present application.
  • Fig. 4 shows a flowchart of a service offloading method according to an embodiment of the present application.
  • Fig. 5 shows a flowchart of a service offloading method according to an embodiment of the present application.
  • Fig. 6 shows a flowchart of a service offloading method according to an embodiment of the present application.
  • Fig. 7 shows a flowchart of a service offloading method according to an embodiment of the present application.
  • Fig. 8 shows a schematic diagram of the interaction process of the service offloading system according to an embodiment of the present application.
  • Fig. 9 shows a block diagram of a service offloading device according to an embodiment of the present application.
  • Fig. 10 shows a schematic structural diagram of a computer system of an electronic device according to an embodiment of the present application.
  • At present, in the 3GPP 5G standardization work, offloading solutions for the local network (that is, the edge network) have been proposed to route specific service flows to the local network.
  • The main offloading schemes include the uplink classifier scheme and the Internet Protocol version 6 multi-homing scheme. However, in both schemes the offload rules have to be configured on the Session Management Function (SMF) entity before the service flow is initiated, which is a relatively static configuration and cannot meet specific service scheduling requirements.
  • the implementation of the uplink classifier scheme is as follows:
  • the session management function entity may decide to insert an uplink classifier on the data path of the packet data unit (Packet Data Unit, PDU for short) session.
  • the uplink classifier is a function supported by the User Plane Function (UPF) entity, which is used to offload part of the data to the local network according to the data filter issued by the session management function entity.
  • the insertion or deletion of an uplink classifier is determined by the session management function entity, and the session management function entity controls the user plane function entity through the N4 interface.
  • the session management function entity may decide to insert a user plane function entity supporting the uplink classifier in the data path of the packet data unit session when the packet data unit session connection is established, or after the packet data unit session is established.
  • After the packet data unit session has been established, the session management function entity may decide to delete a user plane function entity supporting the uplink classifier from the data path of the packet data unit session, or to delete the uplink classifier function on a user plane function entity supporting the uplink classifier. The session management function entity may include one or more user plane function entities on the data path of a packet data unit session.
  • The uplink classifier forwards uplink data to the local network or to the core user plane function entity, and aggregates the data sent to the user equipment, that is, it aggregates the data coming from the local network and from the core user plane function entity that is destined for the user equipment. This operation is based on data detection and on the data offload rules provided by the session management function entity.
  • The uplink classifier applies filtering rules (for example, it examines the Internet Protocol (IP) address/prefix of the uplink data packets sent by the user equipment) and determines how these data packets are routed.
  • In this scheme, the offload rules are configured either by the session management function entity issuing them to the user plane function entity, or by an application function (AF) sending them to the session management function entity, which then issues them to the user plane function entity.
  • This way of configuring offload rules is usually a static configuration.
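  • As a rough illustration only, the following minimal Python sketch shows uplink-classifier style prefix filtering; the prefixes, names, and values are invented for illustration and are not the 3GPP-defined data structures:

```python
# Minimal sketch of uplink-classifier style prefix filtering (illustrative only).
# The prefixes stand in for data filters that, in the UL CL scheme, would be
# issued by the session management function entity over the N4 interface.
import ipaddress

LOCAL_NETWORK_PREFIXES = [
    ipaddress.ip_network("10.20.0.0/16"),        # assumed edge/local network range
    ipaddress.ip_network("2001:db8:abcd::/48"),  # assumed IPv6 example range
]

def classify_uplink(destination_address: str) -> str:
    """Return 'local-network' or 'core-upf' for an uplink data packet."""
    dst = ipaddress.ip_address(destination_address)
    if any(dst in prefix for prefix in LOCAL_NETWORK_PREFIXES):
        return "local-network"
    return "core-upf"

if __name__ == "__main__":
    print(classify_uplink("10.20.1.7"))    # -> local-network
    print(classify_uplink("203.0.113.9"))  # -> core-upf
```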
  • FIG. 1 shows a schematic diagram of an exemplary system architecture to which the technical solutions of the embodiments of the present invention can be applied.
  • the system architecture may include a terminal device 101, a first user plane function entity 102, a second user plane function entity 103, a server 104, an edge network 105, and a session management function entity 106.
  • The terminal device 101 may be the mobile phone shown in FIG. 1, or may be a tablet computer, a portable computer, a desktop computer, an Internet-of-Things device, a smart watch, or any other device capable of accessing a mobile communication network.
  • the first user plane function entity 102 is connected to the edge computing device in the edge network 105 and is used to forward the communication data between the terminal device 101 and the edge network 105;
  • The second user plane function entity 103 is connected to the server 104 and is used to forward the communication data between the terminal device 101 and the server 104.
  • It should be understood that the numbers of terminal devices, first user plane function entities, second user plane function entities, servers, edge computing devices, and session management function entities in FIG. 1 are merely illustrative. According to implementation needs, there may be any number of terminal devices, first user plane function entities, second user plane function entities, servers, edge networks, session management function entities, and so on. Those skilled in the art will understand that FIG. 1 does not show access devices such as base stations, because the solution has no impact on the access devices of the access network.
  • The session connection in this solution nevertheless includes access devices such as base stations or non-3GPP access (for example, WiFi); that is, the terminal device first connects to an access device such as a base station, and the access device then connects to the user plane function entity.
  • In an embodiment of the present application, the first user plane function entity 102 is preconfigured with service flow detection rules. After the terminal device 101 or the server 104 sends service flow data to the first user plane function entity 102, the first user plane function entity 102 detects the service flow data according to the preset rules; if the detection result satisfies the preset rules, the first user plane function entity 102 extracts the network address in the service flow data as the offload address and generates an offload strategy. The first user plane function entity 102 then offloads, according to the offload strategy, the uplink service flow data packets sent by the terminal device 101 whose destination address is the offload address to the edge network 105.
  • In an embodiment, after the first user plane function entity 102 generates the offload strategy, it reports the offload strategy to the session management function entity 106, which makes a decision on it. If the session management function entity 106 agrees with the offload strategy, it returns a response message agreeing to the offload strategy to the first user plane function entity 102, and after receiving this response the first user plane function entity 102 determines to configure the offload strategy. If the session management function entity 106 does not agree with the offload strategy, it returns a response message rejecting the offload strategy, and after receiving the rejection the first user plane function entity 102 determines not to configure the offload strategy.
  • Fig. 2 shows a system architecture diagram of the application of the service offloading method according to an exemplary embodiment of the present application.
  • The difference between the system architecture shown in FIG. 2 and that of FIG. 1 is that FIG. 2 illustrates the scenario in which the service flow data detected by the first user plane function entity is a DNS response message: a service scheduler 107 is added, and the server 104 is a DNS server.
  • Based on the system architecture shown in FIG. 2, in an embodiment of the present application, the terminal device 101 generates a DNS request and sends it to the first user plane function entity 102; the first user plane function entity 102 forwards the DNS request to the second user plane function entity 103, which in turn sends it to the DNS server 104. The DNS server 104 generates a DNS response message according to the DNS request, which is then forwarded to the terminal device 101 through the second user plane function entity 103 and the first user plane function entity 102.
  • the DNS server can be GSLB (Global Server Load Balance).
  • After the first user plane function entity 102 receives the DNS response message forwarded by the second user plane function entity 103, it detects the DNS response message according to the preset rule; if the detection result satisfies the preset rule, the network address contained in the DNS response message is used as the offload address to generate the offload strategy, and at the same time the DNS response message is sent to the terminal device 101.
  • After the first user plane function entity 102 has generated the offload strategy, if the destination address of a service flow data packet sent by the terminal device 101 matches the offload address of the offload strategy, the first user plane function entity 102 offloads that service flow data packet to the edge network 105.
  • The edge computing device in the edge network 105 may replace the source address of the service flow data packet with the network address of the edge computing device and send the modified service flow data packet to the service scheduler 107.
  • After the service scheduler 107 receives the modified service flow data packet, it can identify the source address therein (that is, the network address of the edge computing device) and determine, according to the service deployment situation, whether the edge network 105 in which that edge computing device is located can handle the service access request of the terminal device. If the service scheduler 107 determines that the edge network 105 can handle the service access request of the terminal device 101, the generated response message carries the network address of the service server deployed in the edge network 105; if it determines that the edge network 105 cannot handle the service access request, the generated response message carries the network address of the service server deployed in the core data center.
  • The response message may be sent to the edge computing device in the edge network 105, which replaces the destination address in the response message with the address of the terminal device and sends the response message to the first user plane function entity 102. The terminal device 101 can obtain the network address of the service server in the edge network 105 by parsing the response message, and can then initiate a service access request to the edge network 105 based on that network address.
  • The edge network in the embodiments of the present application is located in an edge computing center. The edge computing center is defined relative to the core data center: the core data center is a centralized data center at the back end, which users access over the network to obtain the data they need, but the distance between the user and the core data center may be large, which increases the service access delay. The edge data center, by contrast, is located close to the user and can keep its data synchronized with the core data center in real time over the WAN, so that it can directly provide users with good service.
  • Fig. 3 shows a flowchart of a service offloading method according to an embodiment of the present application.
  • the service offloading method may be executed by a user plane functional entity, for example, it may be executed by the first user plane functional entity 102 shown in FIG. 1 or FIG. 2.
  • the method includes the following steps:
  • Step S310: Detect the received service flow data according to a preset rule to obtain a detection result;
  • Step S320: If the detection result satisfies the preset rule, use the network address in the service flow data as an offload address to generate an offload strategy;
  • Step S330: Offload the uplink service flow data packets of the terminal device to the edge network according to the offload strategy, the destination address of the uplink service flow data packets being the offload address.
  • step S310 the received service flow data is detected according to a preset rule to obtain a detection result.
  • the first user plane functional entity is configured with detection rules for service flow data. After the first user plane functional entity receives the service flow data, it detects the service flow data according to the preset rules to obtain the detection result.
  • In step S320, if the detection result satisfies the preset rule, the network address in the service flow data is used as the offload address to generate the offload strategy.
  • After the first user plane function entity has used the network address in the service flow data as the offload address and generated the offload strategy, there are the following two subsequent processing options.
  • In the first option, after the first user plane function entity generates the offload strategy, it reports the offload strategy to the session management function entity simply to notify it that the offload strategy has been configured on the first user plane function entity side.
  • In the other option, referring to FIG. 4, the method further includes the following steps:
  • Step S410 Send the offload strategy to the session management function entity
  • Step S420 If a response message for agreeing to the offload strategy is received from the session management function entity, determine to configure the offload strategy
  • Step S430 If a response message of rejecting the offload strategy returned by the session management function entity is received, it is determined not to configure the offload strategy.
  • In step S420, after the session management function entity receives the offload strategy report message sent by the first user plane function entity, it makes a decision on the offload strategy according to the strategy configured by the operator. If the session management function entity agrees with the offload strategy requested by the first user plane function entity, it sends a response message agreeing to the request to the first user plane function entity. After receiving the response message agreeing to the offload strategy, the first user plane function entity determines to configure the offload strategy.
  • In step S430, if the session management function entity rejects the offload strategy requested by the first user plane function entity, it sends a response message rejecting the request to the first user plane function entity. After receiving the response message rejecting the offload strategy, the first user plane function entity determines not to configure the offload strategy.
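  • As an illustration of steps S410 to S430 only, the following Python sketch models the report-and-decide exchange with invented message and field names; it is not the actual N4/PFCP signalling:

```python
# Illustrative sketch of steps S410-S430 (hypothetical names, not real N4/PFCP messages).
from dataclasses import dataclass

@dataclass
class OffloadStrategy:
    offload_address: str            # network address taken from the service flow data
    target: str = "edge-network"

class SessionManagementFunction:
    """Decides on reported offload strategies according to operator configuration."""
    def __init__(self, operator_allowed_addresses):
        self.allowed = set(operator_allowed_addresses)

    def decide(self, strategy: OffloadStrategy) -> str:
        return "agree" if strategy.offload_address in self.allowed else "reject"

class FirstUserPlaneFunction:
    def __init__(self, smf: SessionManagementFunction):
        self.smf = smf
        self.configured = []

    def report_strategy(self, strategy: OffloadStrategy) -> bool:
        """S410: send the strategy; S420/S430: configure it only if the SMF agrees."""
        if self.smf.decide(strategy) == "agree":
            self.configured.append(strategy)    # S420: configure the offload strategy
            return True
        return False                            # S430: do not configure it

smf = SessionManagementFunction(operator_allowed_addresses=["198.51.100.10"])
upf = FirstUserPlaneFunction(smf)
print(upf.report_strategy(OffloadStrategy("198.51.100.10")))  # True  -> configured
print(upf.report_strategy(OffloadStrategy("192.0.2.99")))     # False -> not configured
```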
  • In step S330, the uplink service flow data packets of the terminal device are offloaded to the edge network according to the offload strategy, the destination address of the uplink service flow data packets being the offload address.
  • The uplink service flow data is a service access request sent by the terminal device. Since the first user plane function entity has been configured with the offload strategy, when the destination address of an uplink service flow data packet is the same as the offload address in the offload strategy, the first user plane function entity offloads that uplink service flow data packet to the edge network according to the offload strategy.
  • The technical solution of the embodiments of the present application generates the offload strategy in real time through the user plane function entity's detection of the service flow, realizes flexible, real-time configuration of the offload strategy, and meets specific scheduling requirements for services deployed in the edge network.
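  • The following minimal Python sketch illustrates the flow of steps S310 to S330 under a simplifying assumption: the preset rule is taken to be "the service flow data resolves a domain on a configured target list", and all names and addresses are invented for illustration:

```python
# Minimal sketch of steps S310-S330 (illustrative names and values only).
from dataclasses import dataclass, field

@dataclass
class ServiceFlowData:
    domain: str            # e.g. the domain name carried in a DNS response
    network_address: str   # the address resolved for that domain

@dataclass
class FirstUserPlaneFunction:
    target_domains: set                                   # preset detection rule
    offload_addresses: set = field(default_factory=set)   # configured offload strategy

    def detect(self, data: ServiceFlowData) -> bool:
        """S310: detect the received service flow data against the preset rule."""
        return data.domain in self.target_domains

    def generate_offload_strategy(self, data: ServiceFlowData) -> None:
        """S320: use the network address in the service flow data as the offload address."""
        if self.detect(data):
            self.offload_addresses.add(data.network_address)

    def route_uplink(self, destination_address: str) -> str:
        """S330: offload matching uplink packets to the edge network."""
        if destination_address in self.offload_addresses:
            return "edge-network"
        return "second-upf -> core data center"

upf = FirstUserPlaneFunction(target_domains={"video.example.com"})
upf.generate_offload_strategy(ServiceFlowData("video.example.com", "198.51.100.10"))
print(upf.route_uplink("198.51.100.10"))   # -> edge-network
print(upf.route_uplink("203.0.113.5"))     # -> second-upf -> core data center
```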
  • In an embodiment, when the service flow data is a DNS response message, referring to FIG. 5, the process specifically includes the following steps:
  • Step S510 Receive the DNS request sent by the terminal device and send the DNS request to the DNS server;
  • Step S520 Perform detection on the received DNS response message returned by the DNS server to obtain a detection result
  • Step S530: If the detection result satisfies the preset rule, use the network address included in the DNS response message as the offload address to generate the offload strategy.
  • step S510 the DNS request sent by the terminal device is received and the DNS request is sent to the DNS server.
  • After the terminal device generates the DNS request, it sends the DNS request to the user plane function entity, which forwards it to the DNS server; the DNS server generates a DNS response message in response to the DNS request and sends it to the user plane function entity, which in turn sends the DNS response message to the terminal device.
  • In an embodiment, the DNS request sent by the terminal device may be sent to the first user plane function entity through the base station device; after receiving the DNS request, the first user plane function entity forwards the DNS request to the DNS server through the second user plane function entity.
  • the first user plane functional entity may be I-UPF
  • the second user plane functional entity may be PSA-UPF.
  • step S520 the received DNS response message returned by the DNS server is detected, and the detection result is obtained.
  • After the first user plane function entity receives the DNS response message, it detects the DNS response message according to the preset rule to obtain the detection result.
  • The DNS response message is usually encapsulated with port information, such as port 53 (port 53 is the port opened by the DNS server and is mainly used for domain name resolution), so the first user plane function entity can determine whether a DNS response message has been received according to the port information encapsulated in the received data packet.
  • In step S530, if the detection result satisfies the preset rule, the network address included in the DNS response message is used as the offload address to generate the offload strategy.
  • Generating the offload strategy based on the network address contained in the DNS response message may mean using that network address as the offload address, so that the first user plane function entity can offload the service flow data packets whose destination address is the offload address to the edge network.
  • step S530 specifically includes step S5301 to step S5302, which are described in detail as follows.
  • Step S5301 if the domain name information in the DNS response message meets a preset condition, use the network address included in the DNS response message as the offload address.
  • The domain name information in the DNS response message satisfying the preset condition may mean that the domain name matches a target domain name, where the target domain name may be configured and stored in the first user plane function entity in advance by the operator according to the request of the service provider.
  • That is, whether the network address carried in the DNS response message is used as the offload address is determined by whether the domain name information in the DNS response message is the target domain name. If the domain name information in the DNS response message is a pre-stored target domain name, the network address in the DNS response message is extracted and used as the offload address; if it is not a pre-stored target domain name, the network address in the DNS response message is not extracted as the offload address.
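  • As an illustration of this DNS-based detection, the following Python sketch treats a UDP packet whose source port is 53 as a DNS response and extracts the resolved A-record addresses as offload addresses when the answered domain is on the preset target list. The sketch uses the third-party dnspython package purely as one possible parser; the embodiments do not prescribe any particular implementation, and the target domain is an invented example:

```python
# Sketch of the DNS response detection described above (illustrative only).
# Assumes the dnspython package (pip install dnspython) for wire-format parsing.
import dns.message
import dns.rdatatype

TARGET_DOMAINS = {"video.example.com"}   # preset rule: target domain list (example value)

def extract_offload_addresses(udp_src_port: int, udp_payload: bytes) -> set:
    """Return offload addresses found in a DNS response, or an empty set."""
    if udp_src_port != 53:               # port 53 check: not a DNS response
        return set()
    response = dns.message.from_wire(udp_payload)
    offload_addresses = set()
    for rrset in response.answer:
        domain = rrset.name.to_text(omit_final_dot=True)
        if domain in TARGET_DOMAINS and rrset.rdtype == dns.rdatatype.A:
            # The resolved addresses become offload addresses for the offload strategy.
            offload_addresses.update(rdata.address for rdata in rrset)
    return offload_addresses
```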
  • The offload strategy is then generated according to the offload address; the offload strategy is used to offload service flow data packets sent by the terminal device whose destination address is the offload address to the edge network.
  • After the offload strategy is generated, there are the following two subsequent processing options.
  • In the first option, after the first user plane function entity generates the offload strategy, it reports the offload strategy to the session management function entity to notify it that the offload strategy has been configured on the first user plane function entity side, and the first user plane function entity sends the DNS response message to the terminal device so that the terminal device sends service flow data packets according to the DNS response message.
  • In the second option, the offload strategy is reported to the session management function entity, the session management function entity makes a decision on the offload strategy request, and the DNS response message is then sent to the terminal device.
  • In this case, the method further includes: if a response message agreeing to the offload strategy is received from the session management function entity, determining to configure the offload strategy, and sending the DNS response message to the terminal device so that the terminal device sends service flow data packets according to the DNS response message.
  • Specifically, after the session management function entity receives the offload strategy report message sent by the first user plane function entity, it makes a decision on the offload strategy according to the strategy configured by the operator. If the session management function entity agrees with the offload strategy requested by the first user plane function entity, it sends a response message agreeing to the request to the first user plane function entity. After receiving this response message, the first user plane function entity configures the offload strategy and sends the DNS response message to the terminal device, so that the terminal device sends service flow data packets according to the DNS response message. For example, the terminal device may send a service flow data packet to the network address contained in the DNS response message (this network address being the network address of the service scheduler allocated to the terminal device by the DNS server).
  • If the session management function entity rejects the offload strategy requested by the first user plane function entity, it sends a response message rejecting the request to the first user plane function entity. After the first user plane function entity receives the response message rejecting the offload strategy, the DNS response message is still sent to the terminal device so that the terminal device sends service flow data packets according to the DNS response message. In this case, however, the first user plane function entity does not configure the offload strategy, so after receiving a service flow data packet sent by the terminal device, since no offload strategy is configured for the network address of that packet, the packet is not offloaded to the edge network.
  • If the session management function entity agrees with the offload strategy of the first user plane function entity, the first user plane function entity configures the offload strategy. In that case, service flow data packets are offloaded to the edge network according to the offload strategy and are then sent by the edge computing device to the service scheduler; the edge computing device modifies the source address of the service flow data packet so that the service scheduler can recognize that the service flow data packet can be served by the service server in the edge network, and accordingly returns a response message carrying the network address of the service server in the edge network.
  • In an embodiment, after the first user plane function entity uses the network address contained in the DNS response message as the offload address and generates the offload strategy, referring to FIG. 7, the method further includes the following steps:
  • Step S710: According to the offload strategy, offload the service flow data packets sent by the terminal device whose destination address is the offload address to the edge network; the edge network then sends the service flow data packets to the service scheduler, the network address of the service scheduler being the destination address of the service flow data packets;
  • Step S720: Receive the response message returned by the service scheduler and sent by the edge network, and send the response message to the terminal device, so that the terminal device initiates a service access request according to the network address contained in the response message.
  • In step S710, the service flow data packet sent by the terminal device carries a destination address, namely the network address of the service scheduler.
  • The first user plane function entity determines, according to the configured offload strategy, whether the destination address of the service flow data packet is the offload address in the offload strategy. If the destination address is not the offload address, the service flow data packet is not offloaded to the edge network; instead it is sent directly to the second user plane function entity and then forwarded by the second user plane function entity to the service scheduler. In that case the service scheduler does not recognize the edge network, so the terminal device is assigned the network address of the service server located in the core data center, and the terminal device initiates its service access request to the service server of the core data center. If the destination address of the service flow data packet is the offload address in the offload strategy, the packet is offloaded to the edge network, so that the edge network can modify the source address in the packet and thereby ensure that the service scheduler recognizes that the edge network can handle service access requests from the terminal device.
  • In an embodiment, the edge computing device in the edge network can replace the source address of the service flow data packet with the network address of the edge computing device and send the modified service flow data packet to the service scheduler.
  • The service scheduler can then identify the source address (that is, the network address of the edge computing device) and determine, according to the service deployment, whether the edge network in which the edge computing device is located can handle the service access request of the terminal device.
  • If the service scheduler recognizes that the edge network in which the edge computing device is located can handle the service access request of the terminal device, the generated response message carries the network address of the service server deployed in the edge network; if the service scheduler recognizes that this edge network cannot handle the service access request, the generated response message carries the network address of the service server deployed in the core data center.
  • In step S720, if the service scheduler recognizes that the service server in the edge network can process the service access request of the terminal device, it generates a response message containing the network address of the service server in the edge network. After generating the response message, the service scheduler can send it to the edge computing device in the edge network, which replaces the destination address in the response message with the address of the terminal device and sends it to the first user plane function entity.
  • If the service scheduler recognizes that the edge network cannot process the service access request of the terminal device, it generates a response message containing the network address of the service server of the core data center. After generating the response message, the service scheduler can send it to the edge computing device in the edge network; the edge computing device replaces the destination address of the response message with the network address of the terminal device and sends it to the first user plane function entity, which then sends the response message to the terminal device so that the terminal device initiates a service access request to the service server of the core data center.
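  • A compact Python sketch of this address handling is given below; all addresses and the mapping between edge networks and service servers are invented for illustration:

```python
# Illustrative sketch: the edge computing device rewrites the source address of the
# offloaded packet to its own address, and the service scheduler chooses the service
# server address to return depending on whether it recognises that edge network.
EDGE_DEVICE_ADDRESS = "10.20.0.2"

# Scheduler-side view of the service deployment (example values).
EDGE_SERVICE_SERVERS = {"10.20.0.2": "10.20.0.50"}   # edge device -> edge service server
CORE_SERVICE_SERVER = "203.0.113.80"                 # service server in the core data center

def edge_rewrite_source(packet: dict) -> dict:
    """Edge computing device: replace the packet source with its own network address."""
    rewritten = dict(packet)
    rewritten["src"] = EDGE_DEVICE_ADDRESS
    return rewritten

def scheduler_response(packet: dict) -> dict:
    """Service scheduler: pick the service server address carried in the response."""
    server = EDGE_SERVICE_SERVERS.get(packet["src"], CORE_SERVICE_SERVER)
    return {"dst": packet["src"], "service_server": server}

request = {"src": "192.0.2.10", "dst": "198.51.100.10"}     # sent by the terminal device
response = scheduler_response(edge_rewrite_source(request))
print(response["service_server"])   # -> 10.20.0.50 (served from the edge network)
```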
  • The foregoing embodiment generates the offload strategy based on the network address of the service scheduler carried in the DNS response message returned by the DNS server, and forwards the service flow data packet through the edge network to the service scheduler according to the offload strategy, so that the service scheduler can determine, from the edge network in which the network address of the edge computing device is located, whether that edge network can serve the service access request, and return a response message to the terminal device; the terminal device can then initiate a service access request according to the network address in the response message.
  • In this way an offload strategy is generated in real time, and the service is then delivered to the terminal device by the service server in the edge network, which meets specific service scheduling requirements. Delivering services to terminal devices through the service server in the edge network not only reduces the delay of terminal devices accessing services, but also reduces the bandwidth consumption of the core data center.
  • In an embodiment, after the first user plane function entity sends the response message to the terminal device, the method may further include the following step:
  • when the downlink service flow data packet for the terminal device returned by the edge network is received, returning the downlink service flow data packet to the terminal device.
  • Specifically, the terminal device can initiate a service access request according to the network address contained in the response message, and the uplink service flow data packet containing the service access request is first sent to the first user plane function entity. Since the destination address of the uplink service flow data packet is consistent with the offload address, the first user plane function entity offloads the uplink service flow data packet to the service server in the edge network.
  • After receiving the downlink service flow data packet for the terminal device returned by the edge network, the first user plane function entity (that is, the offload device) returns the downlink service flow data packet to the terminal device; the first user plane function entity may return the service response result to the terminal device through the base station or non-3GPP access.
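  • A tiny sketch of this bidirectional forwarding is shown below, with invented names and addresses: uplink service packets addressed to the offload address are handed to the edge network, and downlink packets received from the edge network are returned to the terminal device over the access network:

```python
# Illustrative sketch of the uplink/downlink forwarding described above.
OFFLOAD_ADDRESSES = {"10.20.0.50"}    # assumed offload address (edge service server)

def forward_uplink(packet: dict) -> str:
    """Uplink packets whose destination is the offload address go to the edge network."""
    return "edge-network" if packet["dst"] in OFFLOAD_ADDRESSES else "second-upf"

def forward_downlink(packet: dict) -> str:
    """Downlink packets from the edge network are returned over the access network."""
    return f"base-station/non-3GPP access -> terminal {packet['dst']}"

print(forward_uplink({"src": "192.0.2.10", "dst": "10.20.0.50"}))    # -> edge-network
print(forward_downlink({"src": "10.20.0.50", "dst": "192.0.2.10"}))  # -> ... terminal 192.0.2.10
```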
  • Fig. 8 shows a schematic diagram of the interaction process of the service offloading system according to an embodiment of the present application, including the following steps:
  • Step S810: The terminal device 101 sends a DNS request to the DNS server 104; the DNS request is used to request the DNS server 104 to allocate the network address of the service scheduler.
  • the terminal device 101 sends a DNS request to the first user plane functional entity 102 through the base station or non-3GPP access, and the first user plane functional entity 102 forwards the DNS request to the second user plane functional entity 103, and then the second user plane functional entity 103 sends the DNS request to the DNS server 104.
  • Step S820: The DNS server 104 selects the corresponding service scheduler 107 according to the DNS request, writes the network address of the service scheduler into a DNS response message, and sends the DNS response message to the second user plane function entity 103 via the Internet; the second user plane function entity 103 then sends the DNS response message to the first user plane function entity 102.
  • Step S830a: After receiving the DNS response message, the first user plane function entity 102 detects the DNS response message according to the preset rule. If the detection result meets the preset rule, it extracts the domain name and the corresponding network address contained in the DNS response message; if the domain name information satisfies the rule preconfigured on the first user plane function entity 102, the network address in the DNS response message is extracted as the offload address and the offload strategy is generated. The offload strategy is used to forward service flow data packets whose destination address matches the offload address to the edge network.
  • Step S830b: After generating the offload strategy, the first user plane function entity 102 reports the generated offload strategy to the session management function entity 106. In one embodiment, the first user plane function entity 102 simply informs the session management function entity 106 that the offload strategy has been configured; in another embodiment, the first user plane function entity 102 sends the offload strategy to the session management function entity 106, which makes a decision on it. If the session management function entity 106 agrees with the requested offload strategy, it sends a response message agreeing to the request to the first user plane function entity 102, and the first user plane function entity 102 determines to configure the offload strategy; if it does not agree, it sends a response message rejecting the request, and the first user plane function entity 102 determines not to configure the offload strategy.
  • Step S840 The terminal device 101 receives the DNS response message returned by the first user plane function entity 102.
  • Step S850 After receiving the DNS response message, the terminal device 101 sends the service flow data packet according to the network address of the service scheduler 107 included in the DNS response message.
  • the service flow data packet is first sent by the terminal device 101 to the first user plane function entity 102.
  • After receiving the service flow data packet, the first user plane function entity 102 forwards it to the edge network 105 according to the offload strategy. The edge computing device in the edge network 105 replaces the source address of the service flow data packet with the network address of the edge computing device and sends it to the service scheduler 107. After the service scheduler 107 receives the modified service flow data packet, the source address (that is, the network address of the edge computing device) can be used to identify whether the edge network 105 in which the edge computing device is located can process the service access request of the terminal device.
  • Step S860: The service scheduler 107 generates a response message according to the service flow data packet and returns the response message to the first user plane function entity 102 through the edge network 105; the first user plane function entity 102 then returns the response message to the terminal device 101.
  • If the service scheduler 107 recognizes that the edge network 105 in which the edge computing device is located can handle the service access request of the terminal device, it generates a response message containing the network address of the service server in the edge network 105 and sends the response message to the edge network 105; the edge computing device in the edge network 105 then replaces the destination address in the response message with the address of the terminal device and sends it to the first user plane function entity 102. If the service scheduler 107 recognizes that the edge network 105 cannot handle the service access request, the generated response message carries the network address of the service server deployed in the core data center.
  • Step S870: After the response message is sent to the terminal device 101, the terminal device 101 may initiate a service access request according to the network address of the service server contained in the response message, and the uplink service flow data packet containing the service access request is first sent to the first user plane function entity 102. Since the destination address of the uplink service flow data packet is consistent with the preconfigured offload address, the first user plane function entity 102 offloads the uplink service flow data packet to the edge network 105.
  • Step S880 The first user plane function entity 102 receives the downlink service flow data packet for the terminal device 101 returned by the edge network 105, and returns the downlink service flow data packet to the terminal device 101.
  • a service offloading apparatus 900 is provided, which is characterized in that the apparatus 900 includes:
  • the detection module 910 is configured to detect the received service flow data according to preset rules to obtain a detection result
  • a generating module 920, configured to, if the detection result meets the preset rule, use the network address in the service flow data as the offload address to generate the offload strategy;
  • the offload module 930 is configured to offload the uplink service flow data packet of the terminal device to the edge network according to the offload strategy, and the destination address of the uplink service flow data packet is the offload address.
  • In an embodiment, after the generating module 920 uses the network address in the service flow data as the offload address and generates the offload strategy when the detection result meets the preset rule, the apparatus further includes:
  • a sending module configured to send the offload strategy to the session management function entity
  • a configuration determining module, configured to determine to configure the offload strategy if a response message agreeing to the offload strategy returned by the session management function entity is received;
  • a non-configuration determining module, configured to determine not to configure the offload strategy if a response message rejecting the offload strategy returned by the session management function entity is received.
  • When the service flow data is a DNS response message, the generating module 920 includes:
  • a receiving unit configured to receive a DNS request sent by a terminal device and send the DNS request to a DNS server;
  • a detection unit configured to detect the received DNS response message returned by the DNS server to obtain a detection result
  • a generating unit, configured to, if the detection result meets the preset rule, use the network address included in the DNS response message as the offload address to generate the offload strategy.
  • the generating unit is further configured to:
  • generate the offload strategy according to the offload address, the offload strategy being used to offload the service flow data packets sent by the terminal device whose destination address is the offload address to the edge network.
  • the device is further configured to:
  • if a response message agreeing to the offload strategy is received from the session management function entity, determine to configure the offload strategy, and send the DNS response message to the terminal device so that the terminal device sends service flow data packets according to the DNS response message.
  • the device is further configured to:
  • offload, according to the offload strategy, the service flow data packets sent by the terminal device whose destination address is the offload address to the edge network; the edge network then sends the service flow data packets to the service scheduler, the network address of the service scheduler being the destination address of the service flow data packets.
  • after sending the response message to the terminal device, the apparatus is further configured to:
  • when the downlink service flow data packet for the terminal device returned by the edge network is received, return the downlink service flow data packet to the terminal device.
  • FIG. 10 shows a schematic structural diagram of a computer system suitable for implementing an electronic device of any embodiment of the present disclosure.
  • As shown in FIG. 10, the computer system 1000 includes a processor or central processing unit (CPU) 1001, which can perform various appropriate actions and processing, for example the method described in any embodiment of the present disclosure, according to a program stored in a read-only memory (ROM) or a program loaded from the storage portion 1008 into a random access memory (RAM) 1003.
  • The RAM 1003 also stores various programs and data required for system operation.
  • The CPU 1001, the ROM, and the RAM 1003 are connected to each other through a bus 1004.
  • An input/output (I/O) interface 1005 is also connected to the bus 1004.
  • The following components are connected to the I/O interface 1005: an input portion 1006 including a keyboard, a mouse, and the like; an output portion 1007 including a cathode ray tube (CRT), a liquid crystal display (LCD), and the like, as well as a speaker; a storage portion 1008 including a hard disk and the like; and a communication portion 1009 including a network interface card such as a LAN card or a modem.
  • the communication section 1009 performs communication processing via a network such as the Internet.
  • A drive 1010 is also connected to the I/O interface 1005 as needed.
  • a removable medium 1011 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, etc., is installed on the drive 1010 as needed, so that the computer program read therefrom is installed into the storage portion 1008 as needed.
  • an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a computer-readable medium, and the computer program contains program code for executing the method shown in the flowchart.
  • the computer program may be downloaded and installed from the network through the communication part 1009, and/or installed from the removable medium 1011.
  • When the computer program is executed by the processor or central processing unit (CPU) 1001, the various functions defined in the method and apparatus of the present application are executed.
  • the computer system 1000 may further include an AI (Artificial Intelligence) processor, and the AI processor is used to process computing operations related to machine learning.
  • the computer-readable medium shown in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two.
  • The computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, and a computer-readable program code is carried therein. This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • the computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium.
  • The computer-readable medium may send, propagate, or transmit the program for use by or in combination with the instruction execution system, apparatus, or device.
  • the program code contained on the computer-readable medium can be transmitted by any suitable medium, including but not limited to: wireless, wire, optical cable, RF, etc., or any suitable combination of the above.
  • The technical solution provided by the embodiments of the present application may have the following beneficial effects: the user plane function entity detects the received service flow data according to preset rules; if the detection result meets the preset rules, the network address in the service flow data is used as the offload address to generate an offload strategy, and the uplink service flow data packets of the terminal device are offloaded to the edge network according to the offload strategy. It can be seen that the technical solution of the embodiments of the present application generates the offload strategy in real time through the user plane function entity's detection of the service flow, realizes flexible, real-time configuration of the offload strategy, and meets specific service requirements.
  • In particular, the DNS response message can be detected by the user plane function entity, and when the preset rule is met, the corresponding address in the DNS response message is configured as the offload address, so that uplink service flow data is offloaded to the edge computing device for specific processing, meeting specific scheduling requirements for services deployed in the edge network.
  • Each block in the flowchart or block diagram may represent a module, a program segment, or a part of code, and the module, program segment, or part of code contains one or more executable instructions for realizing the specified logical function.
  • It should also be noted that, in some alternative implementations, the functions marked in the blocks may occur in an order different from that marked in the drawings; for example, two blocks shown in succession may actually be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved.
  • Each block in the block diagram or flowchart, and combinations of blocks in the block diagram or flowchart, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • the units described in the embodiments of the present disclosure may be implemented in software or hardware, and the described units may also be provided in a processor. Among them, the names of these units do not constitute a limitation on the unit itself under certain circumstances.
  • this application also provides a computer-readable medium.
  • The computer-readable medium may be included in the electronic device described in the above embodiments, or it may exist alone without being assembled into the electronic device.
  • The above computer-readable medium carries one or more programs, and when the one or more programs are executed by an electronic device, the electronic device implements the methods described in the embodiments of this application; for example, the electronic device can implement the steps shown in FIG. 3 to FIG. 7.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present application provides a service offloading method, apparatus and system, as well as an electronic device and a storage medium. The method includes: detecting received service flow data according to a preset rule to obtain a detection result; if the detection result satisfies the preset rule, using the network address in the service flow data as an offload address to generate an offload strategy; and offloading uplink service flow data packets of a terminal device to an edge network according to the offload strategy, the destination address of the uplink service flow data packets being the offload address.

Description

Service offloading method, apparatus and system, electronic device and storage medium
This application claims priority to Chinese Patent Application No. 201911089188.0, entitled "Service offloading method, apparatus and system", filed with the China National Intellectual Property Administration on November 8, 2019, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of computer and communication technology, and in particular to a service offloading method, apparatus and system, and an electronic device and storage medium.
Background
At present, in the 3GPP 5G standardization work, offloading solutions for the local network have been proposed to route specific service flows to the local network. The main offloading schemes include the Uplink Classifier (UL CL) scheme and the Internet Protocol version 6 (IPv6) multi-homing scheme.
发明内容
根据本申请实施例的一方面,提供了一种业务分流方法,所述方法包括:
根据预设规则对接收到的业务流数据进行检测,得到检测结果;
若所述检测结果满足所述预设规则,则将所述业务流数据中的网络地址作为分流地址,生成分流策略;
根据所述分流策略将终端设备的上行业务流数据包分流至边缘网络,所述上行业务流数据包的目的地址为所述分流地址。
根据本申请实施例的一方面,提供了一种业务分流装置,包括:
检测模块,用于根据预设规则对接收到的业务流数据进行检测,得到检测结果;
生成模块,用于若所述检测结果满足所述预设规则,则将所述业务流数据中的网络地址作为分流地址,生成分流策略;
分流模块,用于根据所述分流策略将终端设备的上行业务流数据包分流至边缘网络,所述上行业务流数据包的目的地址为所述分流地址。
根据本申请实施例的一方面,提供了一种业务分流***,所述***包括终端设备、用户面功能实体、会话管理功能实体和边缘网络,其中:
所述终端设备,用于将上行业务流数据包发送至所述用户面功能实体,接收所述用户面功能实体发送的下行业务流数据包;
所述用户面功能实体,用于根据预设规则,对接收到的业务流数据进行检测,得到检测结果,若所述检测结果满足所述预设规则,则将所述业务流数据中的网络地址作为分流地址,生成分流策略,根据所述分流策略将终端设备的上行业务流数据包分流至边缘网络,所述上行业务流数据包的目的地址为所述分流地址;
所述会话管理功能实体,用于接收所述用户面功能实体发送的分流策略,并向所述用户面功能实体返回同意或拒绝所述分流策略的响应消息;
所述边缘网络,用于接收所述用户面功能实体分流的上行业务流数据包,并将下行业务流数据包发送给用户面功能实体。
根据本申请实施例的一方面,提供了一种电子设备,包括存储器和处理器;
所述存储器中存储有计算机程序;
所述处理器用于执行所述计算机程序以使所述电子设备实现本申请任一实施例所述的方法。
根据本申请实施例的一方面,提供了一种计算机可读存储介质,所述计算机可读存储介质上存储有计算机程序,所述计算机程序被处理器执行时使包括所述处理器的电子设备实现本申请任一实施例所述的方法。
本申请的其它特性和优点将通过下面的详细描述变得显然，或部分地通过本申请的实践而习得。
应当理解的是,以上的一般描述和后文的细节描述仅是示例性的,并不能限制本申请。
附图说明
此处的附图被并入说明书中并构成本说明书的一部分,示出了符合本发明的实施例,并与说明书一起用于解释本发明的原理。显而易见地,下面描述中的附图仅仅是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其它的附图。在附图中:
图1示出了根据本申请示例性实施例示出的业务分流方法应用的***构架图。
图2示出了根据本申请示例性实施例示出的业务分流方法应用的***构架图。
图3示出了根据本申请一个实施例示出的业务分流方法的流程图。
图4示出了根据本申请一个实施例示出的业务分流方法的流程图。
图5示出了根据本申请一个实施例示出的业务分流方法的流程图。
图6示出了根据本申请一个实施例示出的业务分流方法的流程图。
图7示出了根据本申请一个实施例示出的业务分流方法的流程图。
图8示出了根据本申请一个实施例的业务分流***的交互过程示意图。
图9示出了根据本申请一个实施例的业务分流装置的框图。
图10示出了根据本申请一个实施例的电子设备的计算机***的结构示意图。
具体实施方式
现在将参考附图更全面地描述示例实施方式。然而,示例实施方式能够以多种形式实施,且不应被理解为限于在此阐述的范例;相反,提供这些示例实施方式使得本申请的描述将更加全面和完整,并将示例实 施方式的构思全面地传达给本领域的技术人员。附图仅为本申请的示意性图解,并非一定是按比例绘制。图中相同的附图标记表示相同或类似的部分,因而将省略对它们的重复描述。
此外,所描述的特征、结构或特性可以以任何合适的方式结合在一个或更多示例实施方式中。在下面的描述中,提供许多具体细节从而给出对本申请的示例实施方式的充分理解。然而,本领域技术人员将意识到,可以实践本申请的技术方案而省略所述特定细节中的一个或更多,或者可以采用其它的方法、组元、步骤等。在其它情况下,不详细示出或描述公知结构、方法、实现或者操作以避免喧宾夺主而使得本申请的各方面变得模糊。
附图中所示的一些方框图是功能实体,不一定必须与物理或逻辑上独立的实体相对应。可以采用软件形式来实现这些功能实体,或在一个或多个硬件模块或集成电路中实现这些功能实体,或在不同网络和/或处理器装置和/或微控制器装置中实现这些功能实体。
目前,在3GPP的5G标准化方案中,提出了针对本地网络(local network)(即边缘网络)的分流的方案,来实现将特定的业务流路由到本地网络。主要的分流方案包括上行分类器方案和互联网协议第6版多归属方案。然而,这两种方案的分流规则的配置均需要在业务流发起前配置在SMF(Session Management Function,会话管理功能,简称SMF)上,属于相对静态的配置方式,无法满足特定业务调度需求。
上行分类器方案的实现方式如下:
在基于上行分类器的分流方案中,会话管理功能实体可以决定在分组数据单元(Packet Data Unit,简称PDU)会话的数据路径上***一个上行分类器。上行分类器是用户面功能(User Plane Function,简称UPF)实体支持的一种功能,用于根据会话管理功能实体下发的数据过滤器来将部分数据分流到本地网络中。***或者删除一个上行分类器是由会话管理功能实体决定的并由会话管理功能实体通过N4接口对用户面功能实体进行控制。会话管理功能实体可以在分组数据单元会话连接建立时, 或者在分组数据单元会话建立完成后,决定在分组数据单元会话的数据路径上***一个支持上行分类器的用户面功能实体。会话管理功能实体可以在分组数据单元会话建立完成后,决定在分组数据单元会话的数据路径上删除一个支持上行分类器的用户面功能实体,或者删除支持上行分类器的用户面功能实体上的上行分类器功能。会话管理功能实体可以在分组数据单元会话的数据路径上包含一个或者多个用户面功能实体。
用户设备(User Equipment,简称UE)不感知数据被上行分类器分流,也不会涉及到***或者删除上行分类器的流程中。上行分类器提供上行数据转发到本地网络或者是核心的用户面功能实体上,并且对发送到用户设备的数据进行聚合,即对来自于本地网络和核心用户面功能实体的发送到用户设备的数据进行聚合。这种操作基于数据检测和会话管理功能实体提供的数据分流规则。上行分类器应用过滤规则(例如检测用户设备发送的上行分类器数据包的网际互联协议(Internet Protocol,简称IP)地址/前缀)并且决定这些数据包如何被路由。
在该方案中,分流规则的配置是通过会话管理功能实体下发给用户面功能实体来实现,或者通过应用功能(Application function,简称AF)将分流规则发送给会话管理功能实体,再由会话管理功能实体下发给用户面功能实体。这种分流规则的配置方法通常属于静态配置的方式。
然而,在实际应用部署中,基于特定的业务场景,对分流规则的设置要求是动态的,需要由用户面功能实体在业务流中检测来对具体的分流规则进行配置。基于静态配置分流规则无法满足特定业务调度需求,为此,本申请特提出了一种业务分流方法。
图1示出了可以应用本发明实施例的技术方案的一个示例性***架构的示意图。
如图1所示,***架构可以包括终端设备101、第一用户面功能实体102、第二用户面功能实体103、服务器104、边缘网络105和会话管理功能实体106。其中,终端设备101可以是图1中所示的手机,还可以是平板电脑、便携式计算机、台式计算机、物联网设备、智能手表等等可以 接入移动通信网络的设备;第一用户面功能实体102连接至边缘网络105中的边缘计算设备,用于转发终端设备101与边缘网络105之间的通信数据;第二用户面功能实体103连接于服务器104,用于转发终端设备101与服务器104之间的通信数据。
应该理解,图1中的终端设备、第一用户面功能实体、第二用户面功能实体、服务器、边缘计算设备、会话管理功能实体的数目仅仅是示意性的。根据实现需要,可以具有任意数目的终端设备、第一用户面功能实体、第二用户面功能实体、服务器、边缘网络、会话管理功能实体等。该领域研究人员可以理解的是,图1中没有包含基站等接入设备,由于该方案对接入网络的接入设备没有影响,所以没有在方案中提及,但是本领域人员应该可以理解的是该方案中的会话连接是包含基站或非3GPP接入(如,WiFi)等接入设备的,即终端设备是先连接到基站等接入设备,然后再由接入设备连接到用户面功能实体的。
在本发明的一个实施例中,第一用户面功能实体102预设业务流检测规则,当终端设备101或者服务器104发送业务流数据至第一用户面功能实体102后,第一用户面功能实体102根据预设规则对业务流数据进行检测,若检测结果满足预设规则,则第一用户面功能实体102提取业务流数据中的网络地址作为分流地址,生成分流策略。第一用户面功能实体102根据分流策略将终端设备101发送的目的地址为分流地址的上行业务流数据包分流至边缘网络105。
在本发明的一个实施例中,第一用户面功能实体102生成分流策略之后,将该分流策略上报至会话管理功能实体106,由会话管理功能实体106对该分流策略进行决策,若会话管理功能实体106同意分流策略,则返回同意分流策略的响应消息至第一用户面功能实体102,第一用户面功能实体102在接收到同意的响应消息后,确定配置分流策略;若会话管理功能实体106不同意分流策略,则向第一用户面功能实体102返回拒绝分流策略的响应消息,第一用户面功能实体102在接收到拒绝的响应消息后,确定不配置分流策略。
图2示出了根据本申请示例性实施例示出的业务分流方法应用的***构架图。其中,图2所示的***架构与图1所述的***架构的区别在于图2所示的***架构是应用第一用户面功能实体检测的业务流数据为DNS响应消息的场景下,在图2所示的***架构下,增加了业务调度器107,服务器104为DNS服务器。
基于图2所示的***架构,在本发明的一个实施例中,终端设备101生成DNS请求后将其发送至第一用户面功能实体102,第一用户面功能实体102将该DNS请求转发至第二用户面功能实体103,进而由第二用户面功能实体103发送至DNS服务器104,DNS服务器104根据该DNS请求生成DNS响应消息,然后通过第二用户面功能实体103和第一用户面功能实体102转发至终端设备101。其中DNS服务器可以是GSLB(Global Server Load Balance,全局负载均衡)。
在本发明的一个实施例中,第一用户面功能实体102接收到第二用户面功能实体103转发来的DNS响应消息之后,根据预设规则对该DNS响应消息进行检测,若检测结果满足预设规则,则将DNS响应消息中包含的网络地址作为分流地址,生成分流策略,同时,将该DNS响应消息发送至终端设备101。
在本发明的一个实施例中,在第一用户面功能实体102生成分流策略之后,如果接收到终端设备101发送的业务流数据包的目的地址与分流策略的分流地址相匹配,则第一用户面功能实体102将该业务流数据包分流至边缘网络105,当边缘网络105接收到该业务流数据包之后,边缘网络105中的边缘计算设备可以将该业务流数据包的源地址替换为边缘计算设备的网络地址并将修改后的业务流数据包发送至业务调度器107。
在本发明的一个实施例中,业务调度器107在接收到修改后的业务流数据包之后,可以根据其中的源地址(即边缘计算设备的网络地址)识别并根据业务部署情况判断该边缘计算设备所在的边缘网络105是否能够处理终端设备的业务访问请求。如果业务调度器107识别出边缘计 算设备所在的边缘网络105能够处理终端设备101的业务访问请求,则生成的响应消息中携带该边缘网络105中部署的业务服务器的网络地址;如果业务调度器107识别出边缘计算设备所在的边缘网络105不能处理终端设备的业务访问请求,则生成的响应消息中携带核心数据中心部署的业务服务器的网络地址。
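A minimal sketch of the scheduler-side decision described in the preceding paragraph is given below, assuming the scheduler keeps a simple mapping from edge computing device addresses to the service servers deployed in the corresponding edge networks; the function and variable names are illustrative assumptions only and are not part of this application.

```python
# Illustrative sketch of the scheduler decision: if the source address seen by the
# scheduler is a known edge computing device, return the address of a service
# server deployed in that edge network; otherwise fall back to the core data center.
def schedule(request_src_addr: str, edge_site_servers: dict, core_server_addr: str) -> str:
    if request_src_addr in edge_site_servers:
        # An edge network able to serve this request was recognized.
        return edge_site_servers[request_src_addr]
    # No suitable edge deployment for this request: use the core data center.
    return core_server_addr

# Example (documentation addresses only):
servers = {"198.51.100.1": "198.51.100.20"}        # edge device -> edge service server
print(schedule("198.51.100.1", servers, "203.0.113.50"))  # edge server address
print(schedule("192.0.2.7", servers, "203.0.113.50"))     # core server address
```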
在本发明的一个实施例中,业务调度器107在生成携带该边缘网络105中部署的业务服务器的网络地址的响应消息之后,可以将该响应消息发送给边缘网络105中的边缘计算设备,进而边缘计算设备将该响应消息中的目的地址替换为终端设备的目的地址并将该响应消息发送给第一用户面功能实体102。
在本发明的一个实施例中,终端设备101在接收到第一用户面功能实体发送的响应消息之后,通过解析该响应消息可以获取到边缘网络105中的业务服务器的网络地址,之后可以基于边缘网络105中的业务服务器的网络地址向边缘网络105发起业务访问请求。
需要说明的是,本发明实施例中的边缘网络位于边缘计算中心,边缘计算中心是与核心数据中心相对而言的,核心数据中心是位于后端的集中式数据中心,用户可以通过网络访问核心数据中心以获取所需要的数据,但是用户与核心数据中心之间的距离可能较远,进而可能会增加业务访问时延;而边缘数据中心是处于最接近用户的地方,并且可以通过广域网与核心数据中心保持实时的数据更新,以直接为用户提供良好的服务。
图3示出了根据本申请一个实施例示出的业务分流方法的流程图。该业务分流方法可以由用户面功能实体执行,比如可以由图1或图2中所示的第一用户面功能实体102来执行。参见图3所示,所述方法包括以下步骤:
步骤S310、根据预设规则对接收到的业务流数据进行检测,得到检测结果;
步骤S320、若所述检测结果满足所述预设规则，则将所述业务流数据中的网络地址作为分流地址，生成分流策略。
步骤S330、根据所述分流策略将终端设备的上行业务流数据包分流至边缘网络,所述上行业务流数据包的目的地址为所述分流地址。
下面对这些步骤进行详细描述。
在步骤S310中,根据预设规则对接收到的业务流数据进行检测,得到检测结果。第一用户面功能实体配置有业务流数据的检测规则,当第一用户面功能实体接收到业务流数据后,根据预设规则对业务流数据进行检测,得到检测结果。
步骤S320、若所述检测结果满足所述预设规则,则将所述业务流数据中的网络地址作为分流地址,生成分流策略。
在本发明的一个实施例中,当第一用户面功能实体将所述业务流数据中的网络地址作为分流地址生成分流策略之后,有以下两种后续处理方式:
第一种方式,当第一用户面功能实体生成分流策略之后,会将该分流策略上报给会话管理功能实体,以通知会话管理功能实体在第一用户面功能实体侧已经配置了分流策略。
第二种方式,当第一用户面功能实体生成分流策略之后,将该分流策略上报给会话管理功能实体,由会话管理功能实体对该分流策略的请求进行决策。在该实施例中,参见图4,在第一用户面功能实体将业务流数据中的网络地址作为分流地址,生成分流策略之后,所述方法还包括以下步骤:
步骤S410、发送所述分流策略至会话管理功能实体;
步骤S420、若接收到所述会话管理功能实体返回的同意所述分流策略的响应消息,则确定配置所述分流策略;
步骤S430、若接收到所述会话管理功能实体返回的拒绝所述分流策略的响应消息,则确定不配置所述分流策略。
在步骤S420中,会话管理功能实体接收到第一用户面功能实体发送的分流策略上报消息后,根据运营商配置的策略对所述分流策略进 行决策,如果会话管理功能实体同意第一用户面功能实体的分流策略的请求,则发送同意请求的响应消息至第一用户面功能实体,第一用户面功能实体接收到会话管理功能实体返回的同意分流策略的响应消息后,则确定配置分流策略。
在步骤S430中,如果会话管理功能实体拒绝第一用户面功能实体的分流策略的请求,则发送拒绝请求的响应消息至第一用户面功能实体,第一用户面功能实体接收到会话管理功能实体返回的拒绝分流策略的响应消息后,则确定不配置分流策略。
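The two outcomes described in steps S410 to S430 (the session management function entity accepting or rejecting the reported offload policy) can be pictured with the short sketch below. The decision callback standing in for the N4 interaction with the SMF, and the message shape, are assumptions for illustration; they are not the actual N4 procedures.

```python
# Illustrative sketch of steps S410-S430: the UPF reports a generated offload
# policy to the SMF and configures it only if the SMF agrees.
def report_policy_to_smf(policy: dict, smf_decide) -> bool:
    accepted = smf_decide(policy)   # SMF decides per operator-configured policy
    if accepted:
        # Agreement response received: the UPF determines to configure the policy.
        print(f"offload policy configured for {policy['offload_address']}")
    else:
        # Rejection response received: the UPF determines not to configure it.
        print(f"offload policy not configured for {policy['offload_address']}")
    return accepted

# Example with an SMF that only accepts offloading toward a known edge prefix.
report_policy_to_smf({"offload_address": "198.51.100.10"},
                     smf_decide=lambda p: p["offload_address"].startswith("198.51.100."))
```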
继续参见图3,在步骤S330中,根据所述分流策略将终端设备的上行业务流数据包分流至边缘网络,所述上行业务流数据包的目的地址为所述分流地址。
上行业务流数据是终端设备发送的业务访问请求。由于第一用户面功能实体配置了分流策略,所以在上行业务流数据包的目的地址与分流策略中的分流地址一致时,第一用户面功能实体则会根据分流策略将该上行业务流数据包分流至边缘网络。
本发明实施例的技术方案通过用户面功能实体对业务流的检测来实时生成分流策略,实现灵活、实时的分流策略的配置,满足特定的对在边缘网络部署的业务的调度需求。
在本发明的一个实施例中,当第一用户面功能实体检测的业务流数据为DNS响应消息时,参见图5,步骤S320具体包括以下步骤:
步骤S510、接收终端设备发送的DNS请求并发送所述DNS请求至DNS服务器;
步骤S520、对接收到的所述DNS服务器返回的DNS响应消息进行检测,得到检测结果;
步骤S530、若所述检测结果满足预设规则,则将所述DNS响应消息中包含的网络地址作为分流地址,生成分流策略。
在步骤S510中,接收终端设备发送的DNS请求并发送所述DNS请求至DNS服务器。
终端设备在生成DNS请求之后,会将该DNS请求发送至用户面功能实体,进而由用户面功能实体将该DNS请求发送至DNS服务器,并由DNS服务器响应该DNS请求生成DNS响应消息。DNS服务器在生成DNS响应消息之后,将该DNS响应消息发送给用户面功能实体,当用户面功能实体接收到该DNS响应消息之后,发送该DNS响应消息至终端设备。
在本发明的一个实施例中,终端设备发送的DNS请求可以是通过基站设备发送至第一用户面功能实体,第一用户面功能实体在接收到该DNS请求之后,通过第二用户面功能实体将该DNS请求转发至DNS服务器。在该实施例中,第一用户面功能实体可以是I-UPF,第二用户面功能实体可以是PSA-UPF。
继续参见图5,在步骤S520中,对接收到的所述DNS服务器返回的DNS响应消息进行检测,得到检测结果。
当第一用户面功能实体接收到DNS响应消息后,根据预设规则对该DNS响应消息进行检测,得到检测结果。
在本发明的一个实施例中,DNS响应消息通常封装有端口信息,比如封装了53端口(53端口是由DNS服务器开放的、主要用于域名解析的端口),因此第一用户面功能实体可以根据接收到的数据包中封装的端口信息来确定是否接收到DNS响应消息。
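As a concrete and deliberately simplified illustration of this port-based detection, the check below treats a downlink UDP packet whose source port is 53 as a DNS response. A real UPF performs this detection on the user plane rather than in application code, and the packet fields shown are assumptions for the example.

```python
# Illustrative sketch only: identify a DNS response by the encapsulated port
# information (port 53 is the well-known port opened by DNS servers).
def looks_like_dns_response(udp_src_port: int) -> bool:
    DNS_PORT = 53
    # A response travels from the DNS server back toward the terminal device,
    # so the server-side (source) port is 53.
    return udp_src_port == DNS_PORT

print(looks_like_dns_response(53))    # True: candidate DNS response
print(looks_like_dns_response(8080))  # False: not DNS traffic
```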
在步骤S530中,若所述检测结果满足预设规则,则将所述DNS响应消息中包含的网络地址作为分流地址,生成分流策略。
在本发明的一个实施例中,根据DNS响应消息包含的网络地址生成分流策略可以是将DNS响应消息中包含的网络地址作为分流地址来生成分流策略,以便第一用户面功能实体能够将目的地址为分流地址的业务流数据包分流到边缘网络。
在本发明的一个实施例中，为了使第一用户面功能实体将目的地址为分流地址的业务流数据包分流给边缘网络，可以由第一用户面功能实体生成相应的分流策略，具体可以如图6所示，步骤S530具体包括步骤S5301至步骤S5302，详细说明如下。
步骤S5301、若所述DNS响应消息中的域名信息满足预设条件,则将所述DNS响应消息中包含的网络地址作为分流地址。
在本发明的一个实施例中,DNS响应消息中的域名信息满足预设条件可以是域名信息满足目标域名的条件,而目标域名可以由运营商根据业务方的请求预先在第一用户面功能实体中配置并存储。通过判断DNS响应消息中的域名信息是否为目标域名,进而确定是否将DNS响应消息中携带的网络地址作为分流地址。若DNS响应消息中的域名信息为预先存储的目标域名时,则提取DNS响应消息中的网络地址,将该网络地址作为分流地址。若DNS响应消息中的域名信息不是预先存储的目标域名,则不提取DNS响应消息中的网络地址作为分流地址。
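The domain-name condition described above can be pictured with the short sketch below: only when the domain carried in the DNS response matches a preconfigured target domain is the answered address extracted as the offload address. The target domain, the example addresses, and the pre-parsed response layout are all assumptions for illustration.

```python
# Illustrative sketch: extract the offload address only for target domains.
TARGET_DOMAINS = {"edge-service.example.com"}   # assumed operator-configured domains

def extract_offload_address(dns_response: dict):
    """Return the answered address if the queried domain is a target domain, else None."""
    domain = dns_response["qname"].rstrip(".").lower()
    if domain in TARGET_DOMAINS:
        return dns_response["answer_address"]
    return None

print(extract_offload_address({"qname": "edge-service.example.com.",
                               "answer_address": "198.51.100.10"}))  # 198.51.100.10
print(extract_offload_address({"qname": "other.example.org.",
                               "answer_address": "203.0.113.5"}))    # None
```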
继续参见图6,在步骤S5302中,根据所述分流地址生成分流策略,所述分流策略用于将终端设备发送的目的地址为所述分流地址的业务流数据包分流至边缘网络。
在本发明的一个实施例中,当第一用户面功能实体根据DNS响应消息中包含的网络地址生成分流策略之后,包括以下两种后续处理方式:
第一种方式,当第一用户面功能实体生成分流策略之后,会将该分流策略上报给会话管理功能实体,以通知会话管理功能实体在第一用户面功能实体侧已经配置了分流策略,同时第一用户面功能实体将DNS响应消息发送给终端设备,以使终端设备根据该DNS响应消息发送业务流数据包。
第二种方式,当第一用户面功能实体生成分流策略之后,将该分流策略上报给会话管理功能实体,由会话管理功能实体对该分流策略的请求进行决策。在收到会话管理功能实体对分流策略的响应后,再发送DNS响应消息给终端设备。在该实施例中,在第一用户面功能实体生成分流策略之后,所述方法还包括:
发送所述分流策略至会话管理功能实体;
若接收到所述会话管理功能实体返回的同意所述分流策略的响应消息,则确定配置所述分流策略,并将所述DNS响应消息发送至所述终端设备,以使所述终端设备根据所述DNS响应消息发送业务流数据包。
在该实施例中,会话管理功能实体接收到第一用户面功能实体发送的分流策略上报消息后,根据运营商配置的策略对所述分流策略进行决策,如果会话管理功能实体同意第一用户面功能实体的分流策略的请求,则发送同意请求的响应消息至第一用户面功能实体,第一用户面功能实体接收到会话管理功能实体返回的同意分流策略的响应消息后,则配置所述的分流策略,并将该DNS响应消息发送至终端设备,以使终端设备根据该DNS响应消息发送业务流数据包。比如,终端设备可以向DNS响应消息中包含的网络地址(该网络地址是由DNS服务器向终端设备分配的业务调度器的网络地址)发送业务流数据包。
如果会话管理功能实体拒绝第一用户面功能实体的分流策略的请求,则发送拒绝请求的响应消息至第一用户面功能实体,第一用户面功能实体接收到会话管理功能实体返回的拒绝分流策略的响应消息后,仍然会将该DNS响应消息发送至终端设备,以使终端设备根据该DNS响应消息发送业务流数据包。但与会话管理功能实体同意第一用户面功能实体的分流策略的请求不同的是,在会话管理功能实体拒绝第一用户面功能实体的分流策略的情况下,第一用户面功能实体不会配置分流策略,所以在接收到终端设备发送的业务流数据包后,由于没有配置针对该业务流数据包的网络地址的分流策略,所以不会将该业务流数据包分流至边缘网络。而在会话管理功能实体同意第一用户面功能实体的分流策略的情况下,第一用户面功能实体会配置分流策略,所以在接收到终端设备发送的业务流数据包后,由于配置了针对该业务流数据包的网络地址的分流策略,则会根据该分流策略将该业务流数据包分流至边缘网络,然后由边缘计算设备发送至业务调度器,以便边缘计算设备通过修改业务流数据包的源地址来确保业务调度器识 别出该业务流数据包可以被边缘网络中的业务服务器服务,从而在返回的响应消息中分配该边缘网络中的业务服务器的网络地址。
在本发明的一个实施例中,第一用户面功能实体将DNS响应消息中包含的网络地址作为分流地址,生成分流策略之后,参见图7,所述方法还包括以下步骤:
步骤S710、根据所述分流策略将终端设备发送的目的地址为所述分流地址的业务流数据包分流至边缘网络,进而由所述边缘网络将所述业务流数据包发送至业务调度器,所述业务调度器的网络地址为所述业务流数据包的目的地址;
步骤S720、接收由所述边缘网络发送的业务调度器返回的响应消息,并将所述响应消息发送至所述终端设备,以使所述终端设备根据所述响应消息中包含的网络地址发起业务访问请求。
在步骤S710中,终端设备发送的业务流数据包携带有业务流数据包的目的地址,也即业务调度器的网络地址,当第一用户面功能实体接收到终端设备发送的业务流数据包后,第一用户面功能实体根据配置的分流策略,判断业务流数据包的目的地址是否为分流策略中的分流地址,若业务流数据包的目的地址不是分流策略中的分流地址,则该业务流数据包不会被分流至边缘网络,该业务流数据包会直接被发送至第二用户面功能实体,然后由第二用户面功能实体发送给业务调度器,业务调度器在未识别到边缘网络的情况下,会向终端设备分配位于核心数据中心的业务服务器的网络地址,以便终端设备向核心数据中心的业务服务器发起业务访问请求。若业务流数据包的目的地址是分流策略中的分流地址,则该业务流数据包会被分流至边缘网络,以便边缘网络通过修改业务流数据包中的源地址来确保业务调度器识别出有边缘网络能够处理终端设备的业务访问请求。
在本发明的一个实施例中,当第一用户面功能实体将业务流数据包分流至边缘网络之后,边缘网络中的边缘计算设备可以将该业务流数据包的源地址替换为边缘计算设备的网络地址并将修改后的业务流 数据包发送至业务调度器,进而业务调度器在接收到修改后的业务流数据包之后,可以根据其中的源地址(即边缘计算设备的网络地址)识别并根据业务部署情况判断该边缘计算设备所在的边缘网络是否能够处理终端设备的业务访问请求,如果业务调度器识别出边缘计算设备所在的边缘网络能够处理终端设备的业务访问请求,则生成的响应消息中携带该边缘网络中部署的业务服务器的网络地址;如果业务调度器识别出边缘计算设备所在的边缘网络不能处理终端设备的业务访问请求,则生成的响应消息中携带核心数据中心部署的业务服务器的网络地址。
继续参见图7,在步骤S720中,如果业务调度器识别出边缘网络中的业务服务器能够处理终端设备的业务访问请求,则生成包含边缘网络中的业务服务器的网络地址的响应消息,业务调度器在生成响应消息之后,可以将该响应消息发送给边缘网络中的边缘计算设备,进而边缘计算设备将该响应消息中的目的地址替换为终端设备的目的地址并发送给第一用户面功能实体。
在本发明的一个实施例中,如果业务调度器识别出边缘网络不能处理终端设备的业务访问请求,则生成包含核心数据中心的业务服务器的网络地址的响应消息,业务调度器在生成响应消息之后,可以将该响应消息发送给边缘网络中的边缘计算设备,并由边缘计算设备将该响应消息的目的地址替换为所述终端设备的网络地址后,发送给第一用户面功能实体,然后由该第一用户面功能实体发送该响应消息至终端设备,以便终端设备向核心数据中心的业务服务器发起业务访问请求。
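The address rewriting performed by the edge computing device, as described in the preceding paragraphs, can be sketched as follows: the uplink packet's source address is replaced with the edge computing device's own address before the packet is sent to the service scheduler, and the destination address of the scheduler's response is replaced with the terminal device's address before it is returned. Packet objects are shown as plain dicts, and all names are illustrative assumptions.

```python
# Illustrative sketch of the edge computing device's address rewriting.
def rewrite_uplink(packet: dict, edge_device_addr: str) -> dict:
    rewritten = dict(packet)
    rewritten["original_src"] = packet["src"]   # remembered so the reply can be restored
    rewritten["src"] = edge_device_addr         # scheduler now sees the edge network
    return rewritten

def rewrite_downlink(response: dict, terminal_addr: str) -> dict:
    rewritten = dict(response)
    rewritten["dst"] = terminal_addr            # deliver the reply to the terminal device
    return rewritten
```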
上述实施例通过根据DNS服务器返回的DNS响应消息中的业务调度器的网络地址,生成分流策略,当终端设备向业务调度器发送业务流数据包时,根据分流策略将该业务流数据包转发至边缘网络,从而使得业务调度器根据获取到的边缘计算设备的网络地址所在的边缘网络判断该边缘网络是否可以服务该业务访问请求,并向终端设备返 回响应消息,终端设备能够根据响应消息中的网络地址发起业务访问请求。
本申请通过实时生成分流策略,进而实现通过边缘网络中的业务服务器向终端设备交付业务,满足了特定业务调度需求。同时,通过边缘网络中的业务服务器向终端设备交付业务不仅降低了终端设备访问业务的时延,而且减少了核心数据中心的带宽消耗。
在本发明的一个实施例中,第一用户面功能实体在将响应消息发送给终端设备之后,还可以包括如下步骤:
接收所述终端设备发送的上行业务流数据包;
若所述上行业务流数据包的目的地址和分流地址一致时,则将所述上行业务流数据包发送至所述边缘网络;
若接收到所述边缘网络返回的针对所述终端设备的下行业务流数据包,则将所述下行业务流数据包返回给所述终端设备。
在本发明的一个实施例中,第一用户面功能实体在将业务调度器返回的响应消息发送至终端设备之后,终端设备可以根据该响应消息中包含的网络地址发起业务访问请求,包含该业务访问请求的上行业务流数据包先被发送至第一用户面功能实体,当上行业务流数据包的目的地址与分流地址一致时,第一用户面功能实体会将该上行业务流数据包分流至边缘网络中的业务服务器。
在本发明的一个实施例中,第一用户面功能实体在接收到边缘网络返回的针对终端设备的下行业务流数据包后,则将下行业务流数据包返回给终端设备。分流设备(即第一用户面功能实体)接收到边缘网络返回的下行业务流数据包后,可以通过基站或非3GPP接入将该业务响应结果返回给终端设备。
图8示出了根据本申请一个实施例的业务分流***的交互过程示意图,包括如下步骤:
步骤S810、终端设备101向DNS服务器104发送DNS请求，该DNS请求用于向DNS服务器104请求分配业务调度器的网络地址。终端设备101通过基站或非3GPP接入发送DNS请求至第一用户面功能实体102，第一用户面功能实体102转发该DNS请求至第二用户面功能实体103，进而由第二用户面功能实体103发送该DNS请求至DNS服务器104。
步骤S820、DNS服务器104根据DNS请求选择相应的业务调度器107,将业务调度器的网络地址写入DNS响应消息,并将DNS响应消息通过互联网发送至第二用户面功能实体103,然后由第二用户面功能实体103发送该DNS响应消息至第一用户面功能实体102。
步骤S830a、第一用户面功能实体102接收到该DNS响应消息后,根据预设规则对该DNS响应消息进行检测,若检测结果满足预设规则,则提取出DNS响应消息内包含的域名与对应的网络地址,如果域名信息满足第一用户面功能实体102预先设置规则,则提取DNS响应消息中的网络地址作为分流地址,生成分流策略,分流策略用于将目的地址与所述分流地址相匹配的业务流数据包转发至边缘网络。
步骤S830b、第一用户面功能实体102在生成分流策略之后,将生成的分流策略上报给会话管理功能实体106。在一种实施方式中,第一用户面功能实体102通知会话管理功能实体106已经配置分流策略;在另一种实施方式中,第一用户面功能实体102将分流策略发送至会话管理功能实体106,由会话管理功能实体106对该分流策略进行决策,如果同意该分流策略的请求,则发送同意请求的响应消息给第一用户面功能实体102,第一用户面功能实体102则确定配置分流策略;如果不同意该分流策略的请求,则发送拒绝请求的响应消息给第一用户面功能实体102,第一用户面功能实体102则确定不配置分流策略。
步骤S840、终端设备101接收第一用户面功能实体102返回的DNS响应消息。
步骤S850、终端设备101在接收到DNS响应消息后,根据DNS响应消息中包含的业务调度器107的网络地址发送业务流数据包。该业务流数据包首先由终端设备101发送至第一用户面功能实体102,第一用 户面功能实体102接收到该业务流数据包后,根据分流策略将该业务流数据包转发至边缘网络105,边缘网络105中的边缘计算设备将业务流数据包的源地址替换为边缘计算设备的网络地址并发送至业务调度器107,进而业务调度器107在接收到修改后的业务流数据包之后,可以根据其中的源地址(即边缘计算设备的网络地址)识别边缘计算设备所在的边缘网络105是否能够处理终端设备的业务访问请求。
步骤S860、业务调度器107根据业务流数据包生成响应消息,并通过边缘网络105返回响应消息至第一用户面功能实体102,进而由第一用户面功能实体102将响应消息返回至终端设备101。
在本发明的一个实施例中,如果业务调度器107识别出边缘计算设备所在的边缘网络105能够处理终端设备的业务访问请求,则生成包含该边缘网络105中的业务服务器的网络地址的响应消息,将该响应消息发送给边缘网络105,进而边缘网络105中的边缘计算设备将该响应消息中的目的地址替换为终端设备的目的地址并发送给第一用户面功能实体102。
在本发明的一个实施例中,如果业务调度器107识别出边缘计算设备所在的边缘网络105不能处理终端设备的业务访问请求,则生成的响应消息中携带核心数据中心部署的业务服务器的网络地址。
步骤S870、在将响应消息发送至终端设备101之后,终端设备101可以根据响应消息中包含的业务服务器的网络地址发起业务访问请求,包含该业务访问请求的上行业务流数据包先被发送至第一用户面功能实体102,由于该上行业务流数据包的目的地址和预先配置的分流地址一致,所以第一用户面功能实体102会将该上行业务流数据包分流至边缘网络105。
步骤S880、第一用户面功能实体102接收边缘网络105返回的针对所述终端设备101的下行业务流数据包,并将该下行业务流数据包返回给终端设备101。
如图9所示，根据本申请的一个实施例，提供了一种业务分流装置900，其特征在于，所述装置900包括：
检测模块910,用于根据预设规则对接收到的业务流数据进行检测,得到检测结果;
生成模块920,用于若所述检测结果满足所述预设规则,则将所述业务流数据中的网络地址作为分流地址,生成分流策略;
分流模块930,用于根据所述分流策略将终端设备的上行业务流数据包分流至边缘网络,所述上行业务流数据包的目的地址为所述分流地址。
在本发明的一个实施例中,在所述生成模块920用于若所述检测结果满足所述预设规则,则将所述业务流数据中的网络地址作为分流地址,生成分流策略之后,所述装置还包括:
发送模块,用于将所述分流策略发送至会话管理功能实体;
确定配置模块,用于若接收到所述会话管理功能实体返回的同意所述分流策略的响应消息,则确定配置所述分流策略;
确定不配置模块,用于若接收到所述会话管理功能实体返回的拒绝所述分流策略的响应消息,则确定不配置所述分流策略。
在本发明的一个实施例中,在所述业务流数据为DNS响应消息时,其中,所述生成模块920包括:
接收单元,用于接收终端设备发送的DNS请求并将所述DNS请求发送至DNS服务器;
检测单元,用于对接收到的所述DNS服务器返回的DNS响应消息进行检测,得到检测结果;
生成单元,用于若所述检测结果满足预设规则,则将所述DNS响应消息中包含的网络地址作为分流地址,生成分流策略。
在本发明的一个实施例中,所述生成单元进一步用于:
若所述DNS响应消息中的域名信息满足预设条件,则将所述DNS响应消息中包含的网络地址作为分流地址;
根据所述分流地址生成分流策略，所述分流策略用于将终端设备发送的目的地址为所述分流地址的业务流数据包分流至边缘网络。
在本发明的一个实施例中,在所述生成单元用于将所述DNS响应消息中包含的网络地址作为分流地址,生成分流策略之后,所述装置进一步用于:
将所述分流策略发送至会话管理功能实体;
若接收到所述会话管理功能实体返回的同意所述分流策略的响应消息,则确定配置所述分流策略,并将所述DNS响应消息发送至所述终端设备,以使所述终端设备根据所述DNS响应消息发送业务流数据包。
在本发明的一个实施例中,在所述生成单元用于将所述DNS响应消息中包含的网络地址作为分流地址,生成分流策略之后,所述装置进一步用于:
根据所述分流策略将终端设备发送的目的地址为所述分流地址的业务流数据包分流至边缘网络,进而由所述边缘网络将所述业务流数据包发送至业务调度器,所述业务调度器的网络地址为所述业务流数据包的目的地址;
接收由所述边缘网络发送的业务调度器返回的响应消息,并将所述响应消息发送至所述终端设备,以使所述终端设备根据所述响应消息中包含的网络地址发起业务访问请求。
在本发明的一个实施例中,将所述响应消息发送至所述终端设备之后,所述装置还用于:
接收所述终端设备发送的上行业务流数据包;
若所述上行业务流数据包的目的地址和分流地址一致时,则将所述上行业务流数据包发送至所述边缘网络;
若接收到所述边缘网络返回的针对所述终端设备的下行业务流数据包,则将所述下行业务流数据包返回给所述终端设备。
图10示出了适于用来实现本公开的任一实施例的电子设备的计算机***的结构示意图。
需要说明的是,图10示出的电子设备的计算机***1000仅是一个示例,不应对本公开实施例的功能和使用范围带来任何限制。
如图10所示,计算机***1000包括处理器或中央处理单元(CPU)1001,其可以根据存储在只读存储器(ROM)(也称为存储部分)1008中的程序或者从存储部分1008加载到随机访问存储器(RAM)1003中的程序而执行各种适当的动作和处理,例如执行本公开的任一实施例所述的方法。在RAM 1003中,还存储有***操作所需的各种程序和数据。CPU 1001、ROM 1008以及RAM 1003通过总线1004彼此相连。输入/输出(I/O)接口1005也连接至总线1004。
以下部件连接至I/O接口1005:包括键盘、鼠标等的输入部分1006;包括诸如阴极射线管(CRT)、液晶显示器(LCD)等以及扬声器等的输出部分1007;包括硬盘等的存储部分1008;以及包括诸如LAN卡、调制解调器等的网络接口卡的通信部分1009。通信部分1009经由诸如因特网的网络执行通信处理。驱动器1010也根据需要连接至I/O接口1005。可拆卸介质1011,诸如磁盘、光盘、磁光盘、半导体存储器等等,根据需要安装在驱动器1010上,以便于从其上读出的计算机程序根据需要被安装入存储部分1008。
特别地,根据本公开的实施例,下文参考流程图描述的过程可以被实现为计算机软件程序。例如,本公开的实施例包括一种计算机程序产品,其包括承载在计算机可读介质上的计算机程序,该计算机程序包含用于执行流程图所示的方法的程序代码。在这样的实施例中,该计算机程序可以通过通信部分1009从网络上被下载和安装,和/或从可拆卸介质1011被安装。在该计算机程序被处理器或中央处理单元(CPU)1001执行时,执行本申请的方法和装置中限定的各种功能。在一些实施例中,计算机***1000还可以包括AI(Artificial Intelligence,人工智能)处理器,该AI处理器用于处理有关机器学习的计算操作。
需要说明的是,本公开所示的计算机可读介质可以是计算机可读信号介质或者计算机可读存储介质或者是上述两者的任意组合。计算机可读存储介质例如可以是——但不限于——电、磁、光、电磁、红外线、 或半导体的***、装置或器件,或者任意以上的组合。计算机可读存储介质的更具体的例子可以包括但不限于:具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、随机访问存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、光纤、便携式紧凑磁盘只读存储器(CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。在本公开中,计算机可读存储介质可以是任何包含或存储程序的有形介质,该程序可以被指令执行***、装置或者器件使用或者与其结合使用。而在本公开中,计算机可读的信号介质可以包括在基带中或者作为载波一部分传播的数据信号,其中承载了计算机可读的程序代码。这种传播的数据信号可以采用多种形式,包括但不限于电磁信号、光信号或上述的任意合适的组合。计算机可读的信号介质还可以是计算机可读存储介质以外的任何计算机可读介质,该计算机可读介质可以发送、传播或者传输用于由指令执行***、装置或者器件使用或者与其结合使用的程序。计算机可读介质上包含的程序代码可以用任何适当的介质传输,包括但不限于:无线、电线、光缆、RF等等,或者上述的任意合适的组合。
本申请的实施例提供的技术方案可以包括以下有益效果:用户面功能实体根据预设规则对接收到的业务流数据进行检测,若检测结果满足预设规则,则将业务流数据中的网络地址作为分流地址,生成分流策略,根据分流策略将终端设备的上行业务流数据包分流至边缘网络。可见,本发明实施例的技术方案通过用户面功能实体对业务流的检测来实时生成分流策略,实现灵活、实时的分流策略的配置,满足特定的业务需求。比如,在本申请实施例提供的业务调度的技术方案中,可以通过用户面功能实体检测DNS响应消息,当满足预设规则时,则把DNS响应消息中相应的地址配置为分流地址,从而把上行业务流数据分流至边缘计算设备进行特定处理,满足特定的对在边缘网络部署的业务的调度需求。
附图中的流程图和框图,图示了按照本公开各种实施例的***、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上, 流程图或框图中的每个方框可以代表一个模块、程序段、或代码的一部分,上述模块、程序段、或代码的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。也应当注意,在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个接连地表示的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图或流程图中的每个方框、以及框图或流程图中的方框的组合,可以用执行规定的功能或操作的专用的基于硬件的***来实现,或者可以用专用硬件与计算机指令的组合来实现。
描述于本公开实施例中所涉及到的单元可以通过软件的方式实现,也可以通过硬件的方式来实现,所描述的单元也可以设置在处理器中。其中,这些单元的名称在某种情况下并不构成对该单元本身的限定。
作为另一方面,本申请还提供了一种计算机可读介质,该计算机可读介质可以是上述实施例中描述的电子设备中所包含的;也可以是单独存在,而未装配入该电子设备中。上述计算机可读介质承载有一个或者多个程序,当上述一个或者多个程序被一个该电子设备执行时,使得该电子设备实现如下述实施例中所述的方法。例如,所述的电子设备可以实现如图3至图7所示的各个步骤等。

Claims (12)

  1. 一种业务分流方法,所述方法包括:
    根据预设规则对接收到的业务流数据进行检测,得到检测结果;
    若所述检测结果满足所述预设规则,则将所述业务流数据中的网络地址作为分流地址,生成分流策略;
    根据所述分流策略将终端设备的上行业务流数据包分流至边缘网络,所述上行业务流数据包的目的地址为所述分流地址。
  2. 根据权利要求1所述的方法,所述将所述业务流数据中的网络地址作为分流地址,生成分流策略之后,该方法还包括:
    将所述分流策略发送至会话管理功能实体;
    若接收到所述会话管理功能实体返回的同意所述分流策略的响应消息,则确定配置所述分流策略;
    若接收到所述会话管理功能实体返回的拒绝所述分流策略的响应消息,则确定不配置所述分流策略。
  3. 根据权利要求1所述的方法,所述业务流数据为DNS响应消息,其中,所述若所述检测结果满足所述预设规则,则将所述业务流数据中的网络地址作为分流地址,生成分流策略,包括:
    接收所述终端设备发送的DNS请求并将所述DNS请求发送至DNS服务器;
    对接收到的所述DNS服务器返回的DNS响应消息进行检测,得到检测结果;
    若所述检测结果满足预设规则,则将所述DNS响应消息中包含的网络地址作为分流地址,生成分流策略。
  4. 根据权利要求3所述的方法,所述将所述DNS响应消息中包含的网络地址作为分流地址,生成分流策略,包括:
    若所述DNS响应消息中的域名信息满足预设条件,则将所述DNS响应消息中包含的网络地址作为分流地址;
根据所述分流地址生成分流策略，所述分流策略用于将所述终端设备发送的目的地址为所述分流地址的业务流数据包分流至所述边缘网络。
  5. 根据权利要求3所述的方法,所述将所述DNS响应消息中包含的网络地址作为分流地址,生成分流策略之后,该方法还包括:
    将所述分流策略发送至会话管理功能实体;
    若接收到所述会话管理功能实体返回的同意所述分流策略的响应消息,则确定配置所述分流策略,并将所述DNS响应消息发送至所述终端设备,以使所述终端设备根据所述DNS响应消息发送业务流数据包。
  6. 根据权利要求3所述的方法,所述将所述DNS响应消息中包含的网络地址作为分流地址,生成分流策略之后,该方法还包括:
    根据所述分流策略将所述终端设备发送的目的地址为所述分流地址的业务流数据包分流至所述边缘网络,进而由所述边缘网络将所述业务流数据包发送至业务调度器,所述业务调度器的网络地址为所述业务流数据包的目的地址;
    接收由所述边缘网络发送的业务调度器返回的响应消息,并将所述响应消息发送至所述终端设备,以使所述终端设备根据所述响应消息中包含的网络地址发起业务访问请求。
  7. 根据权利要求6所述的方法,所述将所述响应消息发送至所述终端设备之后,该方法还包括:
    接收所述终端设备发送的上行业务流数据包;
    若所述上行业务流数据包的目的地址和分流地址一致时,则将所述上行业务流数据包发送至所述边缘网络;
    若接收到所述边缘网络返回的针对所述终端设备的下行业务流数据包,则将所述下行业务流数据包返回给所述终端设备。
  8. 一种业务分流装置,包括:
    检测模块,用于根据预设规则对接收到的业务流数据进行检测,得到检测结果;
    生成模块,用于若所述检测结果满足所述预设规则,则将所述业务流数据中的网络地址作为分流地址,生成分流策略;
    分流模块,用于根据所述分流策略将终端设备的上行业务流数据包分流至边缘网络,所述上行业务流数据包的目的地址为所述分流地址。
  9. 根据权利要求8所述的装置,还包括:
    发送模块,用于将所述分流策略发送至会话管理功能实体;
    确定配置模块,用于若接收到所述会话管理功能实体返回的同意所述分流策略的响应消息,则确定配置所述分流策略;
    确定不配置模块,用于若接收到所述会话管理功能实体返回的拒绝所述分流策略的响应消息,则确定不配置所述分流策略。
  10. 一种业务分流***,包括终端设备、用户面功能实体、会话管理功能实体和边缘网络,其中:
    所述终端设备,用于将上行业务流数据包发送至所述用户面功能实体,接收所述用户面功能实体发送的下行业务流数据包;
    所述用户面功能实体,用于根据预设规则,对接收到的业务流数据进行检测,得到检测结果,若所述检测结果满足所述预设规则,则将所述业务流数据中的网络地址作为分流地址,生成分流策略,根据所述分流策略将所述终端设备的上行业务流数据包分流至边缘网络,所述上行业务流数据包的目的地址为所述分流地址;
    所述会话管理功能实体,用于接收所述用户面功能实体发送的分流策略,并向所述用户面功能实体返回同意或拒绝所述分流策略的响应消息;
    所述边缘网络,用于接收所述用户面功能实体分流的上行业务流数据包,并将下行业务流数据包发送给所述用户面功能实体。
  11. 一种电子设备,包括存储器和处理器;
    所述存储器中存储有计算机程序;
所述处理器用于执行所述计算机程序以使所述电子设备实现权利要求1至7中的任一项所述的方法。
  12. 一种计算机可读存储介质,所述计算机可读存储介质上存储有计算机程序,所述计算机程序被处理器执行时使包括所述处理器的电子设备实现权利要求1至7中的任一项所述的方法。
PCT/CN2020/120325 2019-11-08 2020-10-12 业务分流方法、装置及***以及电子设备和存储介质 WO2021088592A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
JP2022518308A JP7427082B2 (ja) 2019-11-08 2020-10-12 サービスオフロード方法、装置、システム、電子機器、及びコンピュータプログラム
EP20885398.6A EP3985931A4 (en) 2019-11-08 2020-10-12 SERVICE FLOW DIVISION METHOD, APPARATUS AND SYSTEM, ELECTRONIC DEVICE AND STORAGE MEDIA
KR1020227008280A KR20220039814A (ko) 2019-11-08 2020-10-12 서비스 흐름 분할 방법, 장치, 및 시스템, 전자 디바이스, 및 저장 매체
US17/451,746 US20220038378A1 (en) 2019-11-08 2021-10-21 Service offloading method, apparatus, and system, electronic device, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911089188.0A CN110912835B (zh) 2019-11-08 2019-11-08 业务分流方法、装置及***
CN201911089188.0 2019-11-08

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/451,746 Continuation US20220038378A1 (en) 2019-11-08 2021-10-21 Service offloading method, apparatus, and system, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
WO2021088592A1 true WO2021088592A1 (zh) 2021-05-14

Family

ID=69816934

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/120325 WO2021088592A1 (zh) 2019-11-08 2020-10-12 业务分流方法、装置及***以及电子设备和存储介质

Country Status (6)

Country Link
US (1) US20220038378A1 (zh)
EP (1) EP3985931A4 (zh)
JP (1) JP7427082B2 (zh)
KR (1) KR20220039814A (zh)
CN (1) CN110912835B (zh)
WO (1) WO2021088592A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113938957A (zh) * 2021-12-06 2022-01-14 太平洋电信股份有限公司 网络边缘设备的计算分配方法及***

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110290140B (zh) * 2019-06-28 2021-09-24 腾讯科技(深圳)有限公司 多媒体数据处理方法及装置、存储介质、电子设备
CN110912835B (zh) * 2019-11-08 2023-04-07 腾讯科技(深圳)有限公司 业务分流方法、装置及***
CN113473526A (zh) * 2020-03-31 2021-10-01 华为技术有限公司 一种通信方法及装置
CN113949617A (zh) * 2020-07-16 2022-01-18 中移(成都)信息通信科技有限公司 一种组网***、方法、设备及计算机存储介质
CN112312481B (zh) * 2020-09-25 2022-06-21 网络通信与安全紫金山实验室 一种mec与多运营商核心网的通信方法及***
CN115118786B (zh) * 2021-03-22 2024-03-19 中国电信股份有限公司 边缘业务调度方法、装置和***、存储介质
CN113596191B (zh) * 2021-07-23 2023-05-26 腾讯科技(深圳)有限公司 一种数据处理方法、网元设备以及可读存储介质
US20230110752A1 (en) * 2021-10-13 2023-04-13 Microsoft Technology Licensing, Llc Efficiency of routing traffic to an edge compute server at the far edge of a cellular network

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108882305A (zh) * 2017-05-09 2018-11-23 ***通信有限公司研究院 一种数据包的分流方法及装置
CN109429270A (zh) * 2017-06-23 2019-03-05 华为技术有限公司 通信方法及装置
US20190158997A1 (en) * 2016-05-06 2019-05-23 Convida Wireless, Llc Traffic steering at the service layer
CN109889586A (zh) * 2019-02-02 2019-06-14 腾讯科技(深圳)有限公司 通信处理方法、装置、计算机可读介质及电子设备
WO2019186504A1 (en) * 2018-03-29 2019-10-03 Telefonaktiebolaget Lm Ericsson (Publ) Methods for support of user plane separation and user plane local offloading for 5g non-3gpp access
CN110912835A (zh) * 2019-11-08 2020-03-24 腾讯科技(深圳)有限公司 业务分流方法、装置及***

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1758649B (zh) * 2004-10-05 2010-04-28 华为技术有限公司 版本不同的网间互联协议网络互通的方法
WO2008112770A2 (en) * 2007-03-12 2008-09-18 Citrix Systems, Inc. Systems and methods for cache operations
US7706266B2 (en) * 2007-03-12 2010-04-27 Citrix Systems, Inc. Systems and methods of providing proxy-based quality of service
US8782207B2 (en) 2009-10-20 2014-07-15 At&T Intellectual Property I, L.P. System and method to prevent endpoint device recovery flood in NGN
EP2630776B1 (en) * 2010-10-22 2019-07-10 Telefonaktiebolaget LM Ericsson (publ) Mobile-access information based adaptation of network address lookup for differentiated handling of data traffic
US10587503B2 (en) * 2016-04-08 2020-03-10 Apple Inc. User-plane path selection for the edge service
CN108419270B (zh) * 2017-02-10 2021-08-06 中兴通讯股份有限公司 一种业务分流实现方法及装置
CN113194157B (zh) * 2017-06-30 2022-10-28 华为技术有限公司 一种应用实例地址的转换方法和装置
EP3652981B1 (en) * 2017-08-14 2022-04-13 Samsung Electronics Co., Ltd. Method and apparatus for processing anchor user plane function (upf) for local offloading in 5g cellular network
JP6999931B2 (ja) * 2018-01-10 2022-01-19 株式会社国際電気通信基礎技術研究所 通信方法、通信システム、mecサーバ、dnsサーバ、および、トラフィック誘導ルータ
CN110099010B (zh) * 2018-01-31 2021-08-03 华为技术有限公司 一种业务分流的方法和装置
CN108306971B (zh) * 2018-02-02 2020-06-23 网宿科技股份有限公司 一种发送数据资源的获取请求的方法和***
US10848974B2 (en) * 2018-12-28 2020-11-24 Intel Corporation Multi-domain trust establishment in edge cloud architectures
CN112512090B (zh) * 2019-03-15 2022-07-19 腾讯科技(深圳)有限公司 通信处理方法、装置、计算机可读介质及电子设备
CN110198363B (zh) * 2019-05-10 2021-05-18 深圳市腾讯计算机***有限公司 一种移动边缘计算节点的选择方法、装置及***
US11245717B1 (en) * 2019-09-27 2022-02-08 Amazon Technologies, Inc. Automated detection, alarming, and removal of subdomain takeovers

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190158997A1 (en) * 2016-05-06 2019-05-23 Convida Wireless, Llc Traffic steering at the service layer
CN108882305A (zh) * 2017-05-09 2018-11-23 ***通信有限公司研究院 一种数据包的分流方法及装置
CN109429270A (zh) * 2017-06-23 2019-03-05 华为技术有限公司 通信方法及装置
WO2019186504A1 (en) * 2018-03-29 2019-10-03 Telefonaktiebolaget Lm Ericsson (Publ) Methods for support of user plane separation and user plane local offloading for 5g non-3gpp access
CN109889586A (zh) * 2019-02-02 2019-06-14 腾讯科技(深圳)有限公司 通信处理方法、装置、计算机可读介质及电子设备
CN110912835A (zh) * 2019-11-08 2020-03-24 腾讯科技(深圳)有限公司 业务分流方法、装置及***

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ANONYMOUS: "3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; 5G enhanced mobile broadband; Media distribution (Release 15)", 3GPP STANDARD; TECHNICAL SPECIFICATION; 3GPP TR 26.891, 3RD GENERATION PARTNERSHIP PROJECT (3GPP), MOBILE COMPETENCE CENTRE ; 650, ROUTE DES LUCIOLES ; F-06921 SOPHIA-ANTIPOLIS CEDEX ; FRANCE, vol. SA WG4, no. V1.2.0, 6 July 2018 (2018-07-06), Mobile Competence Centre ; 650, route des Lucioles ; F-06921 Sophia-Antipolis Cedex ; France, pages 1 - 40, XP051475106 *
ERICSSON: "3GPP TSG-CT WG3 Meeting #105 C3-193085", ADD DN-AAA RE-AUTHENTICATION, 30 August 2019 (2019-08-30), XP051763404 *
See also references of EP3985931A4 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113938957A (zh) * 2021-12-06 2022-01-14 太平洋电信股份有限公司 网络边缘设备的计算分配方法及***

Also Published As

Publication number Publication date
EP3985931A4 (en) 2022-08-31
CN110912835A (zh) 2020-03-24
JP7427082B2 (ja) 2024-02-02
JP2022550517A (ja) 2022-12-02
CN110912835B (zh) 2023-04-07
EP3985931A1 (en) 2022-04-20
US20220038378A1 (en) 2022-02-03
KR20220039814A (ko) 2022-03-29

Similar Documents

Publication Publication Date Title
WO2021088592A1 (zh) 业务分流方法、装置及***以及电子设备和存储介质
US11563713B2 (en) Domain name server allocation method and apparatus
CN110769039B (zh) 资源调度方法及装置、电子设备和计算机可读存储介质
WO2021218397A1 (zh) 用于实现业务连续性的方法及相关设备
WO2020259509A1 (zh) 一种应用迁移方法及装置
WO2023000935A1 (zh) 一种数据处理方法、网元设备以及可读存储介质
US20200323029A1 (en) Session Processing Method and Apparatus
WO2020103523A1 (zh) 一种网络切片的选择方法、网络设备及终端
WO2015165312A1 (zh) 业务链负载均衡方法及其装置、***
WO2023000940A1 (zh) 数据处理方法、装置、网元设备、存储介质及程序产品
WO2018233451A1 (zh) 通信方法、装置和***
US20230156828A1 (en) Session establishment method and apparatus, system, and computer storage medium
WO2023000936A1 (zh) 一种数据处理方法、网元设备以及可读存储介质
JP2020506629A (ja) ルーティング方法および装置
WO2018129665A1 (zh) 通信方法、网络开放功能网元和控制面网元
WO2018090800A1 (zh) 连接建立方法、设备及***
CN112954768A (zh) 通信方法、装置及***
CN108092787B (zh) 一种缓存调整方法、网络控制器及***
CN114629912A (zh) 基于mec的通信传输方法及装置
US11265931B2 (en) Method and device for establishing connection
CN113068223B (zh) 基于切片信息的本地分流方法、装置、设备及存储介质
WO2022057724A1 (zh) 数据分流方法和装置
KR20170099710A (ko) 분산 클라우드 환경에서 서비스 품질을 보장하는 전용망 서비스 제공 장치 및 방법
US20230144568A1 (en) Application-aware bgp path selection and forwarding
JP5624112B2 (ja) 無線ローカルエリアネットワークにおけるサービス品質制御

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20885398

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020885398

Country of ref document: EP

Effective date: 20220117

ENP Entry into the national phase

Ref document number: 20227008280

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2022518308

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE