CN110611937B - Data distribution method and device, edge data center and readable storage medium


Info

Publication number
CN110611937B
Authority
CN
China
Prior art keywords
processing
data packet
data center
uplink data
local edge
Prior art date
Legal status
Active
Application number
CN201810613236.0A
Other languages
Chinese (zh)
Other versions
CN110611937A (en)
Inventor
张凌
严丽云
杨新章
Current Assignee
China Telecom Corp Ltd
Original Assignee
China Telecom Corp Ltd
Priority date
Filing date
Publication date
Application filed by China Telecom Corp Ltd
Priority to CN201810613236.0A
Publication of CN110611937A
Application granted
Publication of CN110611937B
Legal status: Active

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 - Protocols
    • H04L 67/10 - Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001 - Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L 67/1004 - Server selection for load balancing
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L 67/14 - Session management
    • H04L 67/146 - Markers for unambiguous identification of a particular session, e.g. session cookie or URL-encoding
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W 28/00 - Network traffic management; Network resource management
    • H04W 28/02 - Traffic management, e.g. flow control or congestion control
    • H04W 28/08 - Load balancing or load distribution


Abstract

The disclosure relates to a data distribution method, a data distribution device, an edge data center and a readable storage medium in the field of communications technology. The method comprises: receiving an uplink data packet; looking up the application label corresponding to the application identifier of the uplink data packet; and, according to the application label, determining whether to send the uplink data packet to a local edge data center or to an upper network for processing. The application label is an identifier indicating that the corresponding uplink data packets are preferentially sent either to the local edge data center or to the upper network for processing. The disclosed scheme offloads uplink data packets and uses the edge data center to share the burden of the upper network, improving overall data processing efficiency and user experience.

Description

Data distribution method and device, edge data center and readable storage medium
Technical Field
The present disclosure relates to the field of communications technologies, and in particular, to a data offloading method and apparatus, an edge data center, and a readable storage medium.
Background
The rapid development of mobile communication has driven the continuous emergence of new services. Beyond traditional mobile broadband, mobile communication now serves many new application fields such as AR (Augmented Reality), VR (Virtual Reality), the Internet of Vehicles, industrial control, and IoT (Internet of Things). These place higher demands on network bandwidth and latency, and network load keeps increasing.
In a traditional communication network architecture, all uplink data packets on the access network side pass through the upper core network and then the internet to reach the corresponding service server, so that uplink data can be controlled and managed.
Disclosure of Invention
The inventors have found that, with massive data growth and rising user expectations, having the core network process all uplink data packets from the access network side places a heavy burden on core network equipment, reduces data processing efficiency, and makes it difficult to meet user requirements for high speed and low latency.
One technical problem to be solved by the present disclosure is how to improve the processing efficiency of uplink data packets.
According to some embodiments of the present disclosure, a data offloading method is provided, including: receiving an uplink data packet; searching a corresponding application label according to the application identifier corresponding to the uplink data packet; according to the application label, determining to send the uplink data packet to a local edge data center for processing or to an upper network for processing; the application label comprises an identifier which is used for preferentially sending the corresponding uplink data packet to the local edge data center for processing, or an identifier which is preferentially sent to the upper network for processing.
In some embodiments, sending the upstream data packet to the local edge data center for processing includes: and sending the uplink data packet to a local edge data center, calculating or storing the content in the uplink data packet, or scheduling corresponding resources according to the request information of the uplink data packet.
In some embodiments, the method further comprises: and modifying the application label corresponding to the application identifier according to the frequency of the uplink data packet which is correspondingly sent to the local edge data center for processing and is scheduled by the heterogeneous network within the first preset time.
In some embodiments, when the frequency with which data packets corresponding to an application identifier and sent to the local edge data center for processing are scheduled by the heterogeneous network reaches a first threshold within the first preset time, the application tag corresponding to that application identifier is modified from an identifier indicating that the corresponding uplink data packets are preferentially sent to the local edge data center for processing into an identifier indicating that they are preferentially sent to the upper network for processing.
In some embodiments, the method further comprises: and under the condition that the content in the uplink data packet is cached in the local edge data center, sending the content in the uplink data packet to the different domain data center for caching according to the frequency of the content in the uplink data packet scheduled by the different domain data center in the second preset time.
In some embodiments, when the frequency of the content in the uplink data packet scheduled by the heterogeneous data center within the second preset time reaches the second threshold, the content in the uplink data packet is sent to the heterogeneous data center for caching.
In some embodiments, determining to send the uplink data packet to the local edge data center for processing or to an upper network for processing according to the application tag includes: determining to send the uplink data packet to the local edge data center for processing or to an upper network for processing according to the application label and the load of the local edge data center; or, according to the application label and the load of the upper network, determining to send the uplink data packet to the local edge data center for processing or to the upper network for processing; or, according to the application label, the load of the local edge data center and the load of the upper network, determining to send the uplink data packet to the local edge data center for processing or to the upper network for processing.
According to other embodiments of the present disclosure, there is provided a data offloading device, including: the receiving module is used for receiving the uplink data packet; the label management module is used for searching a corresponding application label according to the application identifier corresponding to the uplink data packet; the processing mode determining module is used for determining to send the uplink data packet to the local edge data center for processing or to an upper network for processing according to the application label; the application label comprises an identifier which is used for preferentially sending the corresponding uplink data packet to the local edge data center for processing, or an identifier which is preferentially sent to the upper network for processing.
In some embodiments, the content in the upstream data packet sent to the local edge data center is calculated or stored, or the corresponding resource is scheduled according to the request information of the upstream data packet.
In some embodiments, the apparatus further comprises: and the label modification module is used for modifying the application label corresponding to the application identifier according to the frequency of the uplink data packet which is sent to the local edge data center for processing and is scheduled by the heterogeneous network within a first preset time.
In some embodiments, the tag modification module is configured to, when the frequency with which data packets corresponding to an application identifier and sent to the local edge data center for processing are scheduled by the heterogeneous network reaches a first threshold within the first preset time, modify the application tag corresponding to that application identifier from an identifier indicating that the corresponding uplink data packets are preferentially sent to the local edge data center for processing into an identifier indicating that they are preferentially sent to the upper network for processing.
In some embodiments, the apparatus further comprises: and the content scheduling module is used for sending the content in the uplink data packet to the different domain data center for caching according to the frequency of the content in the uplink data packet scheduled by the different domain data center within the second preset time under the condition that the content in the uplink data packet is cached in the local edge data center.
In some embodiments, the content scheduling module is configured to send the content in the uplink data packet to the heterogeneous data center for caching when a frequency of the content in the uplink data packet scheduled by the heterogeneous data center within a second preset time reaches a second threshold.
In some embodiments, the processing mode determining module is configured to determine to send the uplink data packet to the local edge data center for processing or to send the uplink data packet to an upper network for processing according to the application tag and a load of the local edge data center; or, according to the application label and the load of the upper network, determining to send the uplink data packet to the local edge data center for processing or to the upper network for processing; or, according to the application label, the load of the local edge data center and the load of the upper network, determining to send the uplink data packet to the local edge data center for processing or to the upper network for processing.
According to still other embodiments of the present disclosure, a data offloading device is provided, including: a memory; and a processor coupled to the memory, the processor configured to perform the data offloading method of any of the preceding embodiments based on instructions stored in the memory.
According to still further embodiments of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored, wherein the program, when executed by a processor, implements the steps of the data offloading method of any of the foregoing embodiments.
According to still further embodiments of the present disclosure, there is provided an edge data center including: the data distribution device of any of the foregoing embodiments; and the data processing module is used for receiving the uplink data packet sent by the data distribution device and processing the uplink data packet.
In the present disclosure, application labels are set for different applications to indicate whether the uplink data packets of an application are sent to the local edge data center or to the upper network for processing. After an uplink data packet is received, the corresponding application label is looked up according to the application identifier, and the packet is then sent to the local edge data center or to the upper network for processing according to that label. This scheme offloads uplink data packets and uses the edge data center to share the burden of the upper network, thereby improving overall data processing efficiency and user experience.
Other features of the present disclosure and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.
Drawings
In order to more clearly illustrate the embodiments of the present disclosure or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present disclosure, and other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 illustrates a flow diagram of a data offloading method of some embodiments of the disclosure.
Fig. 2 shows a flow chart of a data offloading method according to another embodiment of the disclosure.
Fig. 3 shows a schematic structural diagram of a data offloading device according to some embodiments of the disclosure.
Fig. 4 shows a schematic structural diagram of a data offloading device according to another embodiment of the disclosure.
Fig. 5 shows a schematic structural diagram of a data offloading device according to still other embodiments of the disclosure.
Fig. 6 shows a schematic structural diagram of a data offloading device according to still other embodiments of the disclosure.
Fig. 7 illustrates a structural schematic of an edge data center of some embodiments of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present disclosure are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. The following description of at least one exemplary embodiment is merely illustrative and is in no way intended to limit the disclosure, its application, or uses. All other embodiments obtained by a person skilled in the art from the disclosed embodiments without creative effort fall within the protection scope of the present disclosure.
Facing the arrival of the 5G era and the need to process massive data with high speed and low latency, the present disclosure proposes a data offloading method, described below with reference to fig. 1.
Fig. 1 is a flow chart of some embodiments of the disclosed data offloading method. As shown in fig. 1, the method of this embodiment includes: steps S102 to S106.
In step S102, an upstream packet is received.
The uplink data packet may be transmitted by the access network. It may carry data uploaded to an application server, such as a live video stream or data collected by Internet of Things sensors. It may also be a request for a resource from an application server, for example a request for a video or audio resource. It may also carry uplink data that interacts with core network equipment, such as authentication data sent to the core network.
In step S104, a corresponding application tag is searched according to the application identifier corresponding to the uplink data packet.
The applications include not only applications provided by service providers, such as video and games, but also services provided by operators, such as authentication and billing; they are not limited to the examples given.
The application identifier may be a unique number assigned to the application. It may also be destination address information obtained by parsing the packet header, for example a destination IP address or destination port number identifying the destination server of the uplink data packet. In this case, data packets of the same application but with different functions may correspond to different application tags: packets that need to be uniformly controlled and scheduled by the upper cloud data center, such as control and authentication packets, may carry destination address information different from that of packets processed by the local edge data center, so their application identifiers, and hence their application tags, may differ. Furthermore, different application tags may be set for different users of the same application; in that case the application identifier may include both source and destination address information, such as a source IP address and a destination IP address, or a source port number and a destination port number.
A mapping table between application identifiers and application tags can be established and stored; when an uplink data packet arrives, the application tag corresponding to its application identifier is looked up directly in the mapping table. The application tag is an identifier indicating that the corresponding uplink data packets are preferentially sent either to the local edge data center or to the upper network for processing. For example, the application tag may occupy 1 bit, where 0 indicates that the corresponding uplink data packet is sent to the local edge data center for processing and 1 indicates that it is sent to the upper network for processing; the tag is not limited to this example.
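For illustration only, the mapping-table lookup and the 1-bit tag described above might be sketched in Python roughly as follows; the names AppTagTable and dispatch, the packet fields and the fallback behaviour are assumptions of this sketch, not identifiers or requirements from the disclosure.

LOCAL_EDGE = 0   # tag value 0: prefer processing in the local edge data center
UPPER_NET = 1    # tag value 1: prefer forwarding to the upper network

class AppTagTable:
    def __init__(self):
        # mapping table: application identifier (e.g. destination IP + port) -> 1-bit tag
        self._table = {}

    def set_tag(self, app_id, tag):
        self._table[app_id] = tag

    def lookup(self, app_id, default=UPPER_NET):
        # unknown applications fall back to the upper network in this sketch
        return self._table.get(app_id, default)

def dispatch(packet, tags):
    # Decide where an uplink packet goes based on its application tag.
    app_id = (packet["dst_ip"], packet["dst_port"])   # application identifier
    if tags.lookup(app_id) == LOCAL_EDGE:
        return "local_edge_data_center"
    return "upper_network"

# Example: tag one application for local processing, another for the upper network.
tags = AppTagTable()
tags.set_tag(("203.0.113.10", 443), LOCAL_EDGE)
tags.set_tag(("198.51.100.7", 8443), UPPER_NET)
print(dispatch({"dst_ip": "203.0.113.10", "dst_port": 443}, tags))  # local_edge_data_center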
When a new application is provisioned, in addition to existing standard attributes such as client code, application type and life cycle, an application tag can be added for the application to identify the localization attribute of its data packets, that is, whether they are processed in the local edge data center or in the upper network.
In step S106, it is determined to send the uplink data packet to the local edge data center for processing or to the upper network for processing according to the application tag.
Sending the uplink data packet to the local edge data center for processing may include not only scheduling corresponding resources according to the request information in the packet, but also computing on or storing the content in the packet. For example, the local edge data center may cache data (such as popular videos), so that when a user sends a corresponding uplink request the resource is fetched directly from local storage, improving response efficiency. The local edge data center may also compute on the received uplink data in real time; in some application scenarios, for example, it can process data collected in the Internet of Vehicles in real time and provide ultra-low-latency road condition information to vehicles in an area.
The upper network may be a core network, an aggregation network, or, given the wide adoption of cloud technology in the 5G era, a cloud data center. The edge data center is closer to the user side; it may, for example, cover a smart factory and, using Internet of Things technology, process the data collected by the factory's sensors to enable automated production across the whole plant. With a more intelligent edge data center, uplink data from the user side no longer has to be transmitted to the upper core network or the cloud for processing and then returned; the computing capacity and bandwidth of the edge data center are used directly, so the data can be processed efficiently.
In the method of this embodiment, application tags are set for different applications to indicate whether the uplink data packets of an application are sent to the local edge data center or to the upper network for processing. After an uplink data packet is received, the corresponding application tag is looked up according to the application identifier, and the packet is then sent to the local edge data center or to the upper network for processing according to that tag. This offloads uplink data packets and uses the edge data center to share the burden of the upper network, improving overall data processing efficiency and user experience.
The application tag may directly indicate whether the corresponding uplink data packets are sent to the local edge data center or to the upper network for processing, or it may encode a priority. For example, the application tag may be configured as the priority with which the corresponding uplink data packets are sent to the local edge data center for processing; correspondingly, the higher this priority, the lower the priority of sending them to the upper network. In this case, determining according to the application tag whether to send the uplink data packet to the local edge data center or to the upper network for processing (step S106) may include: making the determination according to the application tag and the load of the local edge data center, or according to the application tag and the load of the upper network, or according to the application tag together with the loads of both the local edge data center and the upper network. The load includes, for example, the amount of data currently being processed, resource occupancy, bandwidth occupancy, throughput, and other indicators that measure the processing capacity of the local edge data center or the upper network.
When the determination is made according to the application tag and the load of the local edge data center, load levels can be preset for the local edge data center and mapped to different application tags; when the load of the local edge data center reaches a given level, uplink data packets with the corresponding application tags are sent to the upper network for processing. Thus, when the local edge data center is heavily loaded, only higher-priority packets are sent to it, that is, higher-priority packets have a higher probability of being processed in the local edge data center.
For example, suppose the load is classified into levels 0 to 7, where a larger value indicates a heavier load, and the priority with which the application tag sends the corresponding uplink data packets to the local edge data center is likewise classified into levels 0 to 7, where a larger value indicates a higher priority. If the load reaches level 3, uplink data packets whose application tags have priorities 0 to 3 are sent to the upper network for processing, and those with priorities 4 to 7 are sent to the local edge data center.
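The 0-to-7 load-level example above could be expressed, purely as an illustrative sketch, as a small decision function; the comparison rule below is inferred from the worked example and is an assumption, not a prescribed implementation.

def decide_by_local_load(local_priority, local_load_level):
    # local_priority: 0-7, higher means stronger preference for local edge processing.
    # local_load_level: 0-7, higher means heavier load on the local edge data center.
    if local_priority > local_load_level:
        return "local_edge_data_center"
    return "upper_network"

# With load level 3, tags with priority 0-3 are deflected upward and 4-7 stay local,
# matching the worked example above.
assert decide_by_local_load(2, 3) == "upper_network"
assert decide_by_local_load(5, 3) == "local_edge_data_center"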
When the determination is made according to the application tag and the load of the upper network, load levels can be preset for the upper network and mapped to different application tags; when the load of the upper network reaches a given level, uplink data packets with the corresponding application tags are sent to the local edge data center for processing. For example, the load is classified into levels 0 to 7, where a larger value indicates a heavier load, and the priority with which the application tag sends the corresponding uplink data packets to the upper network is classified into levels 0 to 7, where a larger value indicates a higher priority. If the load reaches level 7, all uplink data packets with tag priorities 0 to 7 are sent to the local edge data center for processing.
When the determination is made according to the application tag, the load of the local edge data center and the load of the upper network, it may be based on the ratio of the local edge data center's load to the upper network's load together with the corresponding application tag. For example, different thresholds may be set on this ratio; when the ratio reaches a given threshold, uplink data packets with the corresponding application tags are sent to the upper network for processing. The larger the ratio of the local edge data center's load to the upper network's load, the more uplink data packets are sent to the upper network for processing.
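A rough sketch of the load-ratio variant follows; the per-priority ratio thresholds and the function names are invented for illustration and are not values from the disclosure.

RATIO_THRESHOLDS = {
    # local priority -> maximum load ratio at which the packet is still kept local (assumed values)
    0: 0.5, 1: 0.8, 2: 1.0, 3: 1.2, 4: 1.5, 5: 2.0, 6: 3.0, 7: 5.0,
}

def decide_by_load_ratio(local_priority, local_load, upper_load):
    # The heavier the local edge data center is relative to the upper network,
    # the more tags are deflected upward.
    ratio = local_load / max(upper_load, 1e-9)   # avoid division by zero
    if ratio <= RATIO_THRESHOLDS[local_priority]:
        return "local_edge_data_center"
    return "upper_network"

print(decide_by_load_ratio(7, local_load=80.0, upper_load=20.0))  # local_edge_data_center
print(decide_by_load_ratio(1, local_load=80.0, upper_load=20.0))  # upper_network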
When an application first accesses the local edge data center, its packets may carry a default application tag value. Before the data of all of the application's services is passed up to the processing unit of the local edge data center, the tag value is evaluated, alone or together with the loads of the local edge data center and the upper network, and the localization attribute of the uplink data packets is biased either toward entering the local processing unit or toward continuing upward. For example, if the default tag value of an application is 7, all of its upstream packets are judged on arrival to have the lowest localization attribute and are preferentially transmitted to the upper network and into the cloud data center, no longer occupying processing resources of the local edge data center. The initial value of the application tag may be assigned by the application provider or the operator when the application service is opened; a value of 0, for example, indicates that the application tends to be processed in the local edge data center it accesses and has the lowest priority for continuing up to the layer-2 switching network or even the upper cloud data center.
The application label can be set initially according to experience, and can be dynamically adjusted subsequently according to the scheduling condition of the corresponding uplink data packet. Described below in conjunction with fig. 2.
Fig. 2 is a flow chart of some embodiments of the disclosed data offloading method. As shown in fig. 2, the method of this embodiment includes steps S202 to S208.
In step S202, an upstream packet is received.
In step S204, the corresponding application tag is searched according to the application identifier corresponding to the uplink data packet.
In step S206, it is determined to send the uplink data packet to the local edge data center for processing or to the upper network for processing according to the application tag.
In step S208, the application label corresponding to the application identifier is modified according to the frequency of the uplink data packet, which is sent to the local edge data center for processing and is scheduled by the heterogeneous network within the first preset time.
Uplink data packets of an application may be sent to the local edge data center for processing but later be scheduled by the upper network or by other edge data centers. For example, when an application launches it may be promoted only in one region; if it later becomes very popular, users in other regions request its resources in large numbers, and because other edge data centers and the upper network do not store the application's resources, those requests are scheduled to the edge data center that does. In this case the application's data may as well be processed in other edge data centers or in the upper network.
In some embodiments, when the frequency with which data packets corresponding to an application identifier and sent to the local edge data center for processing are scheduled by the heterogeneous network reaches a first threshold within the first preset time, the application tag corresponding to that application identifier is modified from an identifier indicating that the corresponding uplink data packets are preferentially sent to the local edge data center for processing into an identifier indicating that they are preferentially sent to the upper network for processing.
When the application tag identifies the priority with which the corresponding uplink data packets are sent to the local edge data center or to the upper network, the more frequently the packets corresponding to a tag are scheduled by the different-domain network within the first preset time, the higher the priority of sending those packets to the upper network becomes. The priority with which packets corresponding to an application identifier are sent to the upper network can thus be continuously adjusted according to the frequency with which they are scheduled by the different-domain network within the first preset time.
For example, suppose the application identifier is destination address p1 and its application tag initially has a "send to the upper network" priority of 0, that is, the application is initially set to be processed locally in the edge data center it accesses; after the application is connected, all of its uplink data is processed in the local edge data center. When calls for the packets corresponding to p1 arrive from other data centers, the application tag corresponding to destination address p1 can be modified according to the call frequency within the first preset time, and subsequent uplink data packets with destination address p1 are processed according to the new tag.
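The frequency-driven tag modification in the p1 example might look roughly like the following sketch, which assumes a sliding time window; the window length, threshold value and class names are illustrative assumptions, and the tags object is the AppTagTable-like table from the earlier sketch.

import time
from collections import defaultdict, deque

FIRST_PRESET_TIME = 600.0   # length of the first preset time window in seconds (assumed)
FIRST_THRESHOLD = 100       # scheduling events within the window (assumed)
UPPER_NET = 1               # same 1-bit convention as the earlier lookup sketch

class TagModifier:
    def __init__(self, tags):
        # tags: an AppTagTable-like object exposing set_tag(app_id, tag)
        self.tags = tags
        self.events = defaultdict(deque)   # app_id -> timestamps of foreign-domain scheduling

    def on_foreign_schedule(self, app_id, now=None):
        now = time.time() if now is None else now
        window = self.events[app_id]
        window.append(now)
        # keep only events that fall inside the first preset time
        while window and now - window[0] > FIRST_PRESET_TIME:
            window.popleft()
        if len(window) >= FIRST_THRESHOLD:
            # the packets are being pulled by other domains too often:
            # switch the tag so later uplink packets prefer the upper network
            self.tags.set_tag(app_id, UPPER_NET)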
Because the application tag can be modified according to the frequency with which the packets are scheduled by the heterogeneous network, the priority for processing in the local edge data center can initially be set higher than the priority for processing in the upper network for ordinary packets. For packets that must be processed by the upper network, the priority for processing in the local edge data center can be configured lower than the priority for sending them to the upper network.
In some embodiments, in the case that the content in the uplink data packet is cached in the local edge data center, the content in the uplink data packet is sent to the heterogeneous data center for caching according to a frequency that the content in the uplink data packet is scheduled by the heterogeneous data center within a second preset time. For example, when the frequency of the content in the uplink data packet scheduled by the heterogeneous data center within the second preset time reaches the second threshold, the content in the uplink data packet is sent to the heterogeneous data center for caching.
For example, an uplink data packet carries a video uploaded by a user, and the video is cached in the local edge data center. The video is then watched frequently by local users, and users in other regions also request it many times; because other edge data centers and the upper network do not store the video, those requests are scheduled to the edge data center that stores it. In this case the video can also be stored in other edge data centers or in the upper network.
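As an illustrative sketch only, the cache-migration rule (second preset time, second threshold) could be implemented along these lines; the counters, threshold values and the replicate_to callback are assumptions, not elements of the disclosure.

import time
from collections import defaultdict, deque

SECOND_PRESET_TIME = 3600.0   # length of the second preset time window in seconds (assumed)
SECOND_THRESHOLD = 50         # foreign-domain fetches within the window (assumed)

class ContentScheduler:
    def __init__(self, replicate_to):
        # replicate_to: callback (content_id, foreign_dc_id) -> None that pushes the
        # locally cached content to the foreign-domain data center's cache
        self.replicate_to = replicate_to
        self.fetches = defaultdict(deque)   # (content_id, foreign_dc_id) -> timestamps

    def on_foreign_fetch(self, content_id, foreign_dc_id, now=None):
        now = time.time() if now is None else now
        window = self.fetches[(content_id, foreign_dc_id)]
        window.append(now)
        while window and now - window[0] > SECOND_PRESET_TIME:
            window.popleft()
        if len(window) >= SECOND_THRESHOLD:
            # the content is hot in that domain: cache a copy there as well
            self.replicate_to(content_id, foreign_dc_id)
            window.clear()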
A content tag can be set for each content item cached in the edge data center to identify the priority with which that content is sent to the heterogeneous network. The content tag may be modified according to the frequency with which the content is scheduled by the heterogeneous data center within the second preset time: the higher the scheduling frequency, the higher the priority of transmitting the content to the heterogeneous network. Whether the content is sent to the foreign-domain data center for processing can then be determined according to the content tag and at least one of the load of the local edge data center and the load of the foreign-domain data center, in the same way that uplink packet processing is determined according to the application tag in the foregoing embodiments, which is not repeated here.
If content cached by the local edge data center is sent to a foreign-domain data center for caching, it is no longer cached locally, and subsequent request packets for that content can be sent to the new cache address for processing. A heterogeneous (foreign-domain) data center may be the upper cloud data center, another edge data center, or the like.
In some embodiments, when an uplink data packet is sent to the local edge data center for processing, the packet is analyzed and may be forwarded to the upper network for processing according to the type of content it carries. Deep packet inspection can be performed on the uplink data packet, and whether to forward it to the upper network is determined by the content category. For example, a packet carrying a popular video may not be identified as one to send to the upper network by the initial application identifier, but after analysis it can be uploaded directly to the upper network for processing according to its content category. In other words, the processing manner can be set per packet and is not limited to the examples above.
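The deep-packet-inspection override described here might be sketched as follows; the classify stub and the content categories are placeholder assumptions.

FORWARD_UP_CATEGORIES = {"hot_video", "bulk_backup"}   # assumed content categories

def classify(payload):
    # placeholder for deep packet inspection; a real classifier would parse the payload
    return "hot_video" if payload.startswith(b"VIDEO") else "other"

def post_route(packet):
    # called after the tag-based decision has already routed the packet locally
    category = classify(packet["payload"])
    if category in FORWARD_UP_CATEGORIES:
        return "upper_network"            # override: hand the packet to the upper network
    return "local_edge_data_center"

print(post_route({"payload": b"VIDEO..."}))  # upper_network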
In the method of this embodiment, the application tag is dynamically adjusted according to how the content in uplink data packets is scheduled by different data centers, so that uplink data packets are dynamically offloaded; this improves data processing efficiency and helps balance the loads of different data centers.
The present disclosure also provides a data offloading device, which may be implemented in hardware or software and may be deployed inside the edge data center or independently. When deployed in the edge data center, it may be placed behind the capability open interface, that is, all uplink data packets pass through the data offloading device before entering subsequent processing devices or modules. The device is described below in conjunction with fig. 3.
Fig. 3 is a block diagram of some embodiments of the data offloading device of the present disclosure. As shown in fig. 3, the apparatus 30 of this embodiment includes: a receiving module 302, a label management module 304 and a processing mode determining module 306.
A receiving module 302, configured to receive an uplink data packet.
And the tag management module 304 is configured to search for a corresponding application tag according to the application identifier corresponding to the uplink data packet.
The application label may include an identifier that the corresponding uplink data packet is preferentially sent to the local edge data center for processing, or an identifier that is preferentially sent to the upper network for processing.
And the processing mode determining module 306 is configured to determine, according to the application tag, to send the uplink data packet to the local edge data center for processing or to an upper network for processing.
The content in the upstream data packet sent to the local edge data center may be calculated or stored, or the corresponding resource may be scheduled according to the request information of the upstream data packet.
In some embodiments, the processing manner determining module 306 is configured to determine, according to the application tag and the load of the local edge data center, to send the uplink data packet to the local edge data center for processing or to send the uplink data packet to an upper network for processing; or, according to the application label and the load of the upper network, determining to send the uplink data packet to the local edge data center for processing or to the upper network for processing; or, according to the application label, the load of the local edge data center and the load of the upper network, determining to send the uplink data packet to the local edge data center for processing or to the upper network for processing.
Further embodiments of the data offloading device of the present disclosure are described below in conjunction with fig. 4.
Fig. 4 is a block diagram of another embodiment of the data offloading device of the present disclosure. As shown in fig. 4, the apparatus 40 of this embodiment includes: a receiving module 402, a tag management module 404, and a processing mode determining module 406, which are similar to the receiving module 302, the tag management module 304, and the processing mode determining module 306, respectively; and a tag modification module 408.
The tag modification module 408 is configured to modify the application tag corresponding to the application identifier according to a frequency that the uplink data packet sent to the local edge data center for processing is scheduled by the heterogeneous network within a first preset time.
In some embodiments, the tag modification module 408 is configured to, when the frequency with which data packets corresponding to an application identifier and sent to the local edge data center for processing are scheduled by the heterogeneous network reaches a first threshold within the first preset time, modify the application tag corresponding to that application identifier from an identifier indicating that the corresponding uplink data packets are preferentially sent to the local edge data center for processing into an identifier indicating that they are preferentially sent to the upper network for processing.
In some embodiments, the apparatus 40 may further include: a content scheduling module 410.
The content scheduling module 410 is configured to, when the content in the uplink data packet is cached in the local edge data center, send the content in the uplink data packet to the heterogeneous data center for caching according to a frequency that the content in the uplink data packet is scheduled by the heterogeneous data center within a second preset time.
In some embodiments, the content scheduling module 410 is configured to send the content in the uplink data packet to the heterogeneous data center for caching when the frequency of the content in the uplink data packet scheduled by the heterogeneous data center within the second preset time reaches the second threshold.
The data offloading device in the embodiments of the present disclosure may be implemented by various computing devices or computer systems, which are described below in conjunction with fig. 5 and 6.
Fig. 5 is a block diagram of some embodiments of the data offloading device of the present disclosure. As shown in fig. 5, the apparatus 50 of this embodiment includes: a memory 510 and a processor 520 coupled to the memory 510, the processor 520 being configured to perform the data offloading method of any of the embodiments of the disclosure based on instructions stored in the memory 510.
Memory 510 may include, for example, system memory, fixed non-volatile storage media, and the like. The system memory stores, for example, an operating system, an application program, a Boot Loader (Boot Loader), a database, and other programs.
Fig. 6 is a block diagram of still other embodiments of the data offloading device of the present disclosure. As shown in fig. 6, the apparatus 60 of this embodiment includes: a memory 610 and a processor 620, similar to the memory 510 and the processor 520, respectively. It may also include an input/output interface 630, a network interface 640, a storage interface 650, and the like. These interfaces 630, 640, 650, the memory 610 and the processor 620 may be connected, for example, via a bus 660. The input/output interface 630 provides a connection interface for input/output devices such as a display, a mouse, a keyboard, and a touch screen. The network interface 640 provides a connection interface for various networking devices, such as a database server or a cloud storage server. The storage interface 650 provides a connection interface for external storage devices such as an SD card or a USB flash drive.
The present disclosure also provides an edge data center, described below in conjunction with fig. 7.
FIG. 7 is a block diagram of some embodiments of an edge data center of the present disclosure. As shown in fig. 7, the edge data center 7 of this embodiment includes: the data distribution device 30/40/50/60 and the data processing module 72 of any of the previous embodiments.
And the data processing module 72 is configured to receive the uplink data packet sent by the data offloading device, and process the uplink data packet.
As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable non-transitory storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is meant to be illustrative of the preferred embodiments of the present disclosure and not to be taken as limiting the disclosure, and any modifications, equivalents, improvements and the like that are within the spirit and scope of the present disclosure are intended to be included therein.

Claims (15)

1. A data distribution method comprises the following steps:
receiving an uplink data packet;
searching a corresponding application label according to the application identifier corresponding to the uplink data packet;
according to the application label, determining to send the uplink data packet to a local edge data center for processing or to an upper network for processing;
modifying an application label corresponding to an application identifier according to the frequency of an uplink data packet which is correspondingly sent to a local edge data center for processing and is scheduled by a heterogeneous network within a first preset time;
the application label comprises an identifier which is used for preferentially sending the corresponding uplink data packet to the local edge data center for processing, or an identifier which is preferentially sent to the upper network for processing.
2. The data offloading method according to claim 1, wherein the sending the uplink data packet to a local edge data center for processing comprises:
and sending the uplink data packet to a local edge data center, calculating or storing the content in the uplink data packet, or scheduling corresponding resources according to the request information of the uplink data packet.
3. The data offloading method of claim 1, wherein,
when the frequency with which data packets corresponding to one application identifier and sent to the local edge data center for processing are scheduled by the heterogeneous network reaches a first threshold within a first preset time, the application label corresponding to the application identifier is modified from an identifier indicating that the corresponding uplink data packets are preferentially sent to the local edge data center for processing into an identifier indicating that they are preferentially sent to the upper network for processing.
4. The data offloading method of claim 1, further comprising:
and under the condition that the content in the uplink data packet is cached in a local edge data center, sending the content in the uplink data packet to a different domain data center for caching according to the frequency of the content in the uplink data packet scheduled by the different domain data center within a second preset time.
5. The data offloading method of claim 4, wherein,
and under the condition that the frequency of the content in the uplink data packet scheduled by the heterogeneous data center within a second preset time reaches a second threshold value, sending the content in the uplink data packet to the heterogeneous data center for caching.
6. The data offloading method of claim 1, wherein,
the determining to send the uplink data packet to a local edge data center for processing or to an upper network for processing according to the application tag includes:
determining to send the uplink data packet to the local edge data center for processing or to an upper network for processing according to the application label and the load of the local edge data center;
or, according to the application label and the load of the upper network, determining to send the uplink data packet to the local edge data center for processing or to the upper network for processing;
or determining to send the uplink data packet to the local edge data center for processing or to the upper network for processing according to the application label, the load of the local edge data center and the load of the upper network.
7. A data offloading device, comprising:
the receiving module is used for receiving the uplink data packet;
the label management module is used for searching a corresponding application label according to the application identifier corresponding to the uplink data packet;
the processing mode determining module is used for determining to send the uplink data packet to a local edge data center for processing or to an upper network for processing according to the application label;
the label modification module is used for modifying the application label corresponding to the application identifier according to the frequency of the uplink data packet which is sent to the local edge data center for processing and is scheduled by the heterogeneous network within a first preset time;
the application label comprises an identifier which is used for preferentially sending the corresponding uplink data packet to the local edge data center for processing, or an identifier which is preferentially sent to the upper network for processing.
8. The data splitting device of claim 7,
the content in the upstream data packets sent to the local edge data center is calculated or stored, or the corresponding resources are scheduled according to the request information of the upstream data packets.
9. The data splitting device of claim 7,
the label modification module is configured to, when the frequency with which data packets corresponding to an application identifier and sent to the local edge data center for processing are scheduled by the heterogeneous network reaches a first threshold within a first preset time, modify the application label corresponding to the application identifier from an identifier indicating that the corresponding uplink data packets are preferentially sent to the local edge data center for processing into an identifier indicating that they are preferentially sent to the upper network for processing.
10. The data offloading device of claim 7, further comprising:
and the content scheduling module is used for sending the content in the uplink data packet to the different domain data center for caching according to the frequency of the content in the uplink data packet scheduled by the different domain data center in a second preset time under the condition that the content in the uplink data packet is cached in the local edge data center.
11. The data offloading device of claim 10,
the content scheduling module is used for sending the content in the uplink data packet to the different domain data center for caching under the condition that the frequency of the content in the uplink data packet scheduled by the different domain data center within second preset time reaches a second threshold value.
12. The data offloading device of claim 7,
the processing mode determining module is used for determining to send the uplink data packet to the local edge data center for processing or to send the uplink data packet to an upper network for processing according to the application label and the load of the local edge data center; or, according to the application label and the load of the upper network, determining to send the uplink data packet to the local edge data center for processing or to the upper network for processing; or, according to the application label, the load of the local edge data center and the load of the upper network, determining to send the uplink data packet to the local edge data center for processing or to the upper network for processing.
13. A data offloading device, comprising:
a memory; and
a processor coupled to the memory, the processor configured to perform the data offloading method of any of claims 1-6 based on instructions stored in the memory.
14. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 6.
15. An edge data center comprising: a data distribution device as claimed in any one of claims 7 to 13; and
and the data processing module is used for receiving the uplink data packet sent by the data distribution device and processing the uplink data packet.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810613236.0A CN110611937B (en) 2018-06-14 2018-06-14 Data distribution method and device, edge data center and readable storage medium

Publications (2)

Publication Number Publication Date
CN110611937A CN110611937A (en) 2019-12-24
CN110611937B (en) 2023-04-07

Family

ID=68887893

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810613236.0A Active CN110611937B (en) 2018-06-14 2018-06-14 Data distribution method and device, edge data center and readable storage medium

Country Status (1)

Country Link
CN (1) CN110611937B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113068241B (en) * 2020-01-02 2023-03-31 ***通信有限公司研究院 Terminal data routing method, base station and medium
CN113095781B (en) * 2021-04-12 2022-09-06 山东大卫国际建筑设计有限公司 Temperature control equipment control method, equipment and medium based on edge calculation
CN115638833B (en) * 2022-12-23 2023-03-31 保定网城软件股份有限公司 Monitoring data processing method and system

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102238634A (en) * 2010-05-05 2011-11-09 ***通信集团公司 Method and device for data distribution in wireless network
CN105471748A (en) * 2015-12-29 2016-04-06 北京神州绿盟信息安全科技股份有限公司 Application shunting method and device
WO2016130058A1 (en) * 2015-02-11 2016-08-18 Telefonaktiebolaget Lm Ericsson (Publ) A node and method for processing copied uplink or downlink local service cloud traffic
WO2017036248A1 (en) * 2015-08-31 2017-03-09 大唐移动通信设备有限公司 Data transmission method, device and system
WO2017113287A1 (en) * 2015-12-31 2017-07-06 华为技术有限公司 Method, base station, terminal and gateway for data transmission
CN107517480A (en) * 2016-06-16 2017-12-26 中兴通讯股份有限公司 The method, apparatus and system of session establishment
CN107566429A (en) * 2016-06-30 2018-01-09 中兴通讯股份有限公司 Base station, the response method of access request, apparatus and system
WO2018054272A1 (en) * 2016-09-22 2018-03-29 中兴通讯股份有限公司 Data transmission method and device, and computer storage medium

Also Published As

Publication number Publication date
CN110611937A (en) 2019-12-24

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant