CN113873302A - Content distribution method, content distribution device, storage medium and electronic equipment


Info

Publication number
CN113873302A
CN113873302A (application CN202111152471.0A)
Authority
CN
China
Prior art keywords
target
edge node
resource
cache
node
Prior art date
Legal status
Granted
Application number
CN202111152471.0A
Other languages
Chinese (zh)
Other versions
CN113873302B (en)
Inventor
刘贵荣
杨泽森
Current Assignee
Beijing Kingsoft Cloud Network Technology Co Ltd
Original Assignee
Beijing Kingsoft Cloud Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Kingsoft Cloud Network Technology Co Ltd
Priority to CN202111152471.0A
Publication of CN113873302A
Application granted
Publication of CN113873302B
Legal status: Active


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 - Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25 - Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/262 - Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists
    • H04N21/26208 - Content or additional data distribution scheduling, the scheduling operation being performed under constraints
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L67/01 - Protocols
    • H04L67/10 - Protocols in which an application is distributed across nodes in the network
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L67/50 - Network services
    • H04L67/56 - Provisioning of proxy services
    • H04L67/568 - Storing data temporarily at an intermediate stage, e.g. caching
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 - Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 - Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/231 - Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
    • H04N21/23106 - Content storage operation involving caching operations

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention discloses a content distribution method, a content distribution device, a storage medium and electronic equipment. The method comprises the following steps: acquiring a target cache request sent by a target edge node, wherein the target cache request is used for requesting that a target resource in a source station or middle layer be cached at the target edge node; and sending a target cache instruction to the target edge node, wherein the target cache instruction instructs the target edge node to acquire the target resource from a first edge node, the first edge node being an edge node that has previously requested the target resource. The invention solves the technical problem of high distribution cost in the content distribution process.

Description

Content distribution method, content distribution device, storage medium and electronic equipment
Technical Field
The invention relates to the field of intelligent equipment, in particular to a content distribution method, a content distribution device, a storage medium and electronic equipment.
Background
In the prior art, during content distribution, if a user requests a target resource from an edge node that has not cached it, the edge node must fetch the target resource from the source station or middle layer and then deliver it to the user. If every resource is fetched from the source station or middle layer, bandwidth distribution costs rise.
Disclosure of Invention
The embodiment of the invention provides a content distribution method, a content distribution device, a storage medium and electronic equipment, which are used for at least solving the technical problem of high distribution cost in a content distribution process.
According to an aspect of the embodiments of the present invention, there is provided a content distribution method applied to a scheduling center, including: acquiring a target cache request sent by a target edge node, wherein the target cache request is used for requesting that a target resource in a source station or middle layer be cached at the target edge node; and sending a target cache instruction to the target edge node, wherein the target cache instruction instructs the target edge node to acquire the target resource from a first edge node, the first edge node being an edge node that has previously requested the target resource.
According to another aspect of the embodiments of the present invention, there is also provided a content distribution method applied to an edge node, including: sending a target cache request to a scheduling center, wherein the target cache request is used for requesting that a target resource in a source station or middle layer be cached at the target edge node; and receiving a target cache instruction returned by the scheduling center, wherein the target cache instruction instructs the target edge node to acquire the target resource from a first edge node, the first edge node being an edge node that has previously requested the target resource.
According to another aspect of the embodiments of the present invention, there is provided a content distribution apparatus applied to a scheduling center, including: an obtaining unit, configured to obtain a target cache request sent by a target edge node, where the target cache request is used to request that a target resource in a source station or a middle layer is cached in the target edge node; a first sending unit, configured to send a target cache instruction to the target edge node, where the target cache instruction is used to instruct the target edge node to obtain the target resource from a first edge node, and the first edge node is an edge node that has requested the target resource in advance.
As an optional example, the apparatus further includes: a first recording unit, configured to, before obtaining a target cache request sent by a target edge node, record node information of the edge node that requests the resource in an access record of the resource when obtaining a cache request that requests to cache any resource in the source station or the middle layer.
As an optional example, the apparatus further includes: a second sending unit, configured to, after obtaining a target caching request sent by a target edge node, send a first caching instruction to the target edge node when an access record of the target resource does not include node information of any edge node, where the first caching instruction is used to instruct the target edge node to cache the target resource from the source station or the middle layer; and a second recording unit, configured to record the node information of the target edge node in an access record of the target resource.
As an optional example, the apparatus further includes: a third sending unit, configured to, after obtaining a target cache request sent by a target edge node, send a second cache instruction to a second edge node and send a third cache instruction to the target edge node when node information of any edge node is not included in an access record of the target resource, where the second cache instruction is used to instruct the second edge node to obtain the target resource from the source station or a middle layer, the third cache instruction is used to instruct the target edge node to obtain the target resource from the second edge node, and a cost of bandwidth consumed by the target edge node to obtain the target resource from the second edge node is smaller than a first threshold; a third recording unit, configured to record the node information of the second edge node and the node information of the target edge node in an access record of the target resource.
As an optional example, the first sending unit includes: a selecting module, configured to select one edge node from the plurality of edge nodes as the first edge node when the access record of the target resource indicates that the target resource has been accessed by the plurality of edge nodes.
As an optional example, the selecting module includes: a first determining submodule configured to use an edge node closest to the target edge node among the plurality of edge nodes as the first edge node.
As an optional example, the selecting module includes: and a second determining submodule configured to use an edge node with a smallest load among the plurality of edge nodes as the first edge node.
As an optional example, the obtaining unit includes: and the acquisition module is used for acquiring the domain name and the URL address of the target resource and the node information of the target edge node in the target cache request.
According to another aspect of the embodiments of the present invention, there is provided a content distribution apparatus, applied to an edge node, including: a sending unit, configured to send a target cache request to a scheduling center, where the target cache request is used to request that a target resource in a source station or a middle layer is cached in a target edge node; a first receiving unit, configured to receive a target cache instruction returned by the scheduling center, where the target cache instruction is used to instruct the target edge node to acquire the target resource from a first edge node, and the first edge node is an edge node that has requested the target resource in advance.
As an optional example, the scheduling center is further configured to record node information of an edge node that requests the resource in an access record of the resource when a cache request that requests to cache any resource in the source station or the middle tier is obtained, where the apparatus further includes: a second receiving unit, configured to receive a first cache instruction sent by a scheduling center when an access record of the target resource does not include node information of any edge node after sending a target cache request to the scheduling center, where the first cache instruction is used to instruct the target edge node to cache the target resource from the source station or the middle layer; and the first cache unit is used for responding to the first cache instruction and requesting the source station or the middle layer to cache the target resource.
As an optional example, the scheduling center is further configured to record node information of an edge node that requests the resource in an access record of the resource when a cache request that requests to cache any resource in the source station or the middle tier is obtained, where the apparatus further includes: a third receiving unit, configured to receive a third cache instruction sent by a scheduling center, when node information of any edge node is not included in an access record of the target resource after sending a target cache request to the scheduling center, where the third cache instruction is used to instruct the target edge node to acquire the target resource from a second edge node, a cost of bandwidth consumed by the target edge node to acquire the target resource from the second edge node is less than a first threshold, and the scheduling center is further used to send a second cache instruction to the second edge node, and the second cache instruction is used to instruct the second edge node to acquire the target resource from the source station or the middle layer; and a second cache unit, configured to respond to the third cache instruction and obtain the target resource from the second edge node.
According to still another aspect of the embodiments of the present invention, there is also provided a storage medium having a computer program stored therein, wherein the computer program is configured to execute the above-mentioned content distribution method when running.
According to still another aspect of the embodiments of the present invention, there is also provided an electronic device including a memory in which a computer program is stored and a processor configured to execute the content distribution method described above by the computer program.
In the embodiment of the invention, a target cache request sent by a target edge node is obtained, the target cache request requesting that a target resource in the source station or middle layer be cached at the target edge node, and a target cache instruction is sent to the target edge node instructing it to obtain the target resource from a first edge node that has previously requested it. In this method, when a user requests a target resource from an edge node that has not cached it, the edge node can obtain the target resource from another edge node that has already requested it, rather than from the source station or middle layer, thereby reducing the distribution cost of the content distribution network and solving the technical problem of high distribution cost.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a flow diagram of an alternative method of content distribution according to an embodiment of the present invention;
FIG. 2 is a content distribution architecture diagram of an alternative content distribution method according to an embodiment of the present invention;
FIG. 3 is a content distribution architecture diagram of another alternative content distribution method according to an embodiment of the present invention;
FIG. 4 is a schematic view of an access record of an alternative content distribution method according to an embodiment of the present invention;
FIG. 5 is a system flow diagram of an alternative method of content distribution according to an embodiment of the present invention;
FIG. 6 is a system flow diagram of an alternative method of content distribution according to an embodiment of the present invention;
FIG. 7 is a flow diagram of an alternative method of content distribution according to an embodiment of the present invention;
FIG. 8 is a flow diagram of yet another alternative content distribution method according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of an alternative content distribution apparatus according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of another alternative content distribution apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to a first aspect of the embodiments of the present invention, there is provided a content distribution method applied to a scheduling center, optionally, as shown in fig. 1, the method includes:
s102, a target cache request sent by a target edge node is obtained, wherein the target cache request is used for requesting to cache a target resource in a source station or a middle layer into the target edge node;
and S104, sending a target cache instruction to the target edge node, wherein the target cache instruction is used for indicating the target edge node to acquire the target resource from the first edge node, and the first edge node is an edge node which has requested the target resource in advance.
Optionally, this embodiment uses a Content Delivery Network (CDN). A CDN is a content delivery network built on top of the underlying network: relying on edge servers (edge nodes) deployed in various locations, together with the central platform's load-balancing, content-delivery, and scheduling modules, it lets users obtain the required content from a nearby node, reducing network congestion and improving access response speed and hit rate.
The CDN's global scheduling system schedules a user's access request to a CDN edge node. After the user reaches an edge node, if the requested resource exists on that node, the node returns the content directly. If the resource is not on the node, the edge node pulls the content back toward the source station, returns it to the user, and at the same time caches it locally, so the next access to that resource is answered directly. To reduce pressure on the source station, multiple cache layers may be provided, achieving multi-level caching and back-to-source convergence. As shown in fig. 2, fig. 2 is a schematic diagram of an alternative CDN. In fig. 2, the user requests the target resource from edge node 1; edge node 1 has not cached it, so edge node 1 fetches the target resource from middle layer 1. If middle layer 1 also lacks the target resource, it is fetched from the source station and finally delivered to the user.
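The hit-or-pull chain described above can be sketched as a stack of caches. This is a minimal illustration of conventional back-to-source behavior, not the patent's implementation; all class and variable names are assumptions.

```python
class SourceStation:
    """Origin that always holds the authoritative copy of each resource."""
    def __init__(self, resources):
        self.resources = resources

    def fetch(self, url):
        return self.resources[url]


class CacheLayer:
    """An edge node or middle layer: serve from cache on a hit,
    otherwise pull from the upstream layer and cache the result."""
    def __init__(self, upstream):
        self.upstream = upstream
        self.cache = {}

    def fetch(self, url):
        if url not in self.cache:
            self.cache[url] = self.upstream.fetch(url)
        return self.cache[url]


source = SourceStation({"/video/1.mp4": b"video-bytes"})
middle = CacheLayer(source)   # middle layer converges back-to-source traffic
edge = CacheLayer(middle)     # edge node closest to the user

first = edge.fetch("/video/1.mp4")   # miss at edge and middle, pulled from source
second = edge.fetch("/video/1.mp4")  # now answered directly from the edge cache
```

After the first fetch, both the edge node and the middle layer hold a copy, which is exactly why fetching everything through this path inflates source-station bandwidth.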
Optionally, in this embodiment, the target edge node may be any one of CDN edge nodes. After receiving a request of a user for requesting a target resource, the target edge node needs to acquire the target resource if the target edge node does not have the target resource. In this embodiment, the target edge node may send a target cache request to the scheduling center, and the scheduling center may determine which edge nodes have requested the target resource after acquiring the target cache request sent by the target edge node. And then, the node information of the first edge node which requests the target resource is sent to the target edge node, and the target edge node acquires the target resource from the first edge node.
Optionally, in this embodiment, the type of the target resource is not limited. And may be any one or any combination of multiple of video, audio, pictures, text, files, and the like.
Optionally, the edge nodes in this embodiment may be arranged at different positions as needed. The edge node and the source station or the middle layer can upload and download data. The user can access the required data by accessing the nearby edge node. The source station or middle tier or edge node may be a server.
As shown in fig. 3, when a user requests a target resource from an edge node 1, a scheduling center obtains a target cache request and returns a target cache instruction to the edge node 1, and the edge node 1 obtains the target resource from an edge node 2 and issues the target resource to the user.
Through the method, when the user acquires the target resource from the edge node and the edge node does not cache the target resource, the edge node can acquire the target resource from other edge nodes which have requested the target resource, without acquiring the target resource from a source station or a middle layer, thereby achieving the effect of reducing the distribution cost of the content distribution network.
As an optional implementation manner, before obtaining the target cache request sent by the target edge node, the method further includes:
under the condition of acquiring a cache request for requesting any resource in a cache source station or a middle layer, recording node information of an edge node requesting the resource into an access record of the resource.
Optionally, in this embodiment, the scheduling center may record an access record of each resource accessed by the edge node. If any edge node accesses a resource of the source station or the middle layer, a record of the edge node accessing the resource is correspondingly recorded.
Optionally, in this embodiment, an access record table may be established for the accessed resource, and node information of the edge node accessing the resource is recorded in the access record table. The node information may be a node unique flag of the edge node.
For example, as shown in fig. 4, fig. 4 is an alternative access record of edge nodes. After target resource 1 in the source station or middle layer has been accessed by edge nodes 1 to 3, the scheduling center records the access record of target resource 1. Edge node 3 also requests target resource 2, and the scheduling center records the access record of target resource 2. Fig. 4 is only an example and does not limit the access record of the target resource of the present application.
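The bookkeeping illustrated in fig. 4 amounts to a mapping from each resource to the edge nodes that have requested it. A sketch under assumed names, not the patent's actual data structure:

```python
from collections import defaultdict


class DispatchCenter:
    """Keeps, per resource, the ordered list of edge nodes that requested it."""
    def __init__(self):
        self.access_records = defaultdict(list)

    def record_access(self, resource_key, node_id):
        # A node's unique flag is recorded at most once per resource.
        if node_id not in self.access_records[resource_key]:
            self.access_records[resource_key].append(node_id)


# Reproduce the fig. 4 example: resource 1 accessed by edge nodes 1-3,
# resource 2 accessed by edge node 3 only.
center = DispatchCenter()
for node in ("edge-1", "edge-2", "edge-3"):
    center.record_access("target-resource-1", node)
center.record_access("target-resource-2", "edge-3")
```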
As an optional implementation manner, after obtaining the target cache request sent by the target edge node, the method further includes:
under the condition that the access record of the target resource does not include node information of any edge node, sending a first cache instruction to the target edge node, wherein the first cache instruction is used for indicating the target edge node to cache the target resource from a source station or a middle layer;
and recording the node information of the target edge node into the access record of the target resource.
Optionally, in this embodiment, the target edge node may obtain the target resource from a first edge node that has previously requested it. However, if no edge node has ever requested the target resource, the scheduling center cannot return a first edge node, because no edge node is eligible to act as one. In that case the scheduling center may return the first cache instruction to the target edge node, and on receiving it the target edge node obtains the target resource from the source station or middle layer. The flow is shown in steps S502 to S512 of fig. 5: the user requests the target resource from the edge node; the edge node lacks the target resource and sends a target cache request to the scheduling center; the scheduling center determines that no edge node has accessed the target resource, so it returns the first cache instruction; the edge node fetches the target resource from the source station or middle layer and returns it to the user. The scheduling center then saves the edge node's information into the access record of the target resource.
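The dispatch decision just described can be sketched as a single lookup against the access record. The instruction shapes and key names below are illustrative assumptions, not structures defined by the patent:

```python
def handle_cache_request(access_records, resource_key, requesting_node):
    """If no edge node has requested the resource, return a first cache
    instruction (fetch from source station or middle layer); otherwise
    return a target cache instruction pointing at a prior requester."""
    prior_nodes = access_records.get(resource_key, [])
    if prior_nodes:
        instruction = {"type": "target_cache", "fetch_from": prior_nodes[0]}
    else:
        instruction = {"type": "first_cache", "fetch_from": "source_or_middle"}
    # Either way the requester ends up caching the resource, so record it.
    access_records.setdefault(resource_key, []).append(requesting_node)
    return instruction
```

The second request for the same resource is then served from the first requester, which is the cost saving the method is after.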
As an optional implementation manner, after obtaining the target cache request sent by the target edge node, the method further includes:
under the condition that the access record of the target resource does not include node information of any edge node, sending a second cache instruction to a second edge node and sending a third cache instruction to the target edge node, wherein the second cache instruction is used for indicating the second edge node to acquire the target resource from a source station or a middle layer, the third cache instruction is used for indicating the target edge node to acquire the target resource from the second edge node, and the cost of bandwidth consumed by the target edge node to acquire the target resource from the second edge node is less than a first threshold value;
and recording the node information of the second edge node and the node information of the target edge node into the access record of the target resource.
Optionally, in this embodiment, the target edge node may obtain the target resource from a first edge node that has previously requested it. However, if no edge node has requested the target resource, the scheduling center cannot return a first edge node. Instead, the scheduling center may select one edge node from all the edge nodes as a second edge node and return that node's information to the target edge node, so that the target edge node obtains the target resource from the second edge node. Of course, since the second edge node has not accessed the target resource either, it does not yet store it; it must first fetch the target resource from the source station or middle layer, after which the target edge node obtains it from the second edge node. The scheduling center records the node information of both the second edge node and the target edge node in the access record of the target resource. The flow may be as shown in steps S602 to S616 of fig. 6: the user requests the target resource from the target edge node; the target edge node has not cached it and sends a target cache request to the scheduling center; the scheduling center determines that no edge node has accessed the target resource, then sends a third cache instruction to the target edge node and a second cache instruction to the second edge node. The second edge node is the node designated by the scheduling center, chosen so that the cost of bandwidth consumed by the target edge node in obtaining the target resource from it is less than the first threshold.
The second edge node fetches the target resource from the source station or middle layer; the target edge node then obtains the target resource from the second edge node and returns it to the user.
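Designating the second edge node amounts to filtering candidates by the bandwidth-cost constraint. The cost model below is a hypothetical stand-in (the patent only states that the cost must be below the first threshold, not how it is computed):

```python
def designate_second_node(candidate_nodes, bandwidth_cost, first_threshold):
    """Pick a second edge node whose bandwidth cost toward the target edge
    node is below the first threshold. Among qualifying nodes the cheapest
    is chosen (an assumption); returns None when none qualifies."""
    affordable = [n for n in candidate_nodes if bandwidth_cost[n] < first_threshold]
    if not affordable:
        return None
    return min(affordable, key=lambda n: bandwidth_cost[n])
```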
As an optional implementation, sending the target caching instruction to the target edge node includes:
and in the case that the access record of the target resource indicates that a plurality of edge nodes access the target resource, selecting one edge node from the plurality of edge nodes as a first edge node.
Optionally, in this embodiment, the target edge node may obtain the target resource from the first edge node that has requested the target resource. If there are more than one edge node requesting the target resource, the dispatch center may select one edge node from the more than one edge node requesting the target resource as the first edge node.
As an optional implementation, selecting one edge node from the plurality of edge nodes as the first edge node comprises:
and taking the edge node which is closest to the target edge node in the plurality of edge nodes as a first edge node.
Optionally, in this embodiment, when multiple edge nodes have requested the target resource and the scheduling center selects one of them as the first edge node, it may obtain the distance between each such edge node and the target edge node, take the edge node with the smallest distance as the first edge node, and have the target edge node obtain the target resource from that node.
As an optional implementation, selecting one edge node from the plurality of edge nodes as the first edge node comprises:
and taking the edge node with the minimum load as a first edge node in the plurality of edge nodes.
Optionally, in this embodiment, when multiple edge nodes have requested the target resource and the scheduling center selects one of them as the first edge node, it may obtain the current load of each such edge node; a larger load indicates a busier node. The edge node with the smallest load is taken as the first edge node, and the target edge node obtains the target resource from that node.
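Both selection policies described above reduce to a minimum over the candidate set. A sketch with assumed metric maps (distance and load values are illustrative):

```python
def pick_nearest(candidate_nodes, distance_to_target):
    """Distance-based selection: candidate nearest to the target edge node."""
    return min(candidate_nodes, key=lambda n: distance_to_target[n])


def pick_least_loaded(candidate_nodes, current_load):
    """Load-based selection: candidate with the smallest current load."""
    return min(candidate_nodes, key=lambda n: current_load[n])
```

The two policies can of course disagree; which one the scheduling center applies is an implementation choice.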
As an optional implementation manner, the obtaining of the target cache request sent by the target edge node includes:
and acquiring the domain name and URL address of the target resource and node information of the target edge node in the target cache request.
Optionally, in this embodiment, the domain name and the URL address of the target resource are carried in the target cache request by the target edge node, and the scheduling center determines the target resource requested by the target edge node according to the domain name and the URL address. The node information of the target edge node is recorded into the access record of the target resource.
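A sketch of extracting these fields from a cache request follows. The `"url"` and `"node"` field names are illustrative assumptions; the patent only says the request carries the domain name, the URL address, and the node information.

```python
from urllib.parse import urlsplit

def parse_cache_request(request):
    # Hypothetical request shape: {"url": ..., "node": ...}.
    url = request["url"]                # e.g. "http://abc.com/1.txt"
    domain = urlsplit(url).netloc       # domain name of the target resource
    node = request["node"]              # node information of the target edge node
    return domain, url, node
```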
The architecture of this embodiment is shown in fig. 3. Both the query of this embodiment and the ordinary back-to-source request occur only when the target resource misses the cache: the user requests the target resource from an edge node, the edge node looks up the target resource locally, and returns it directly on a hit. If the edge node does not have the target resource, then when a request reaches the edge node, the service program on the node sends a query request to the dispatch center, carrying key information such as the domain name, the access URL, and the name of the requesting node. The dispatch center returns one of two states according to the request information. One is a 302 response directed to another node; 302 here refers to the HTTP redirect status code, used to implement the jump function. The other is a resource-not-found response indicating that no access record for the resource was found. After receiving one of these responses, the service program behaves differently according to the response status code: it either follows the 302 response to fetch the content from another node, as in path 4 of fig. 3, or obtains the content along the back-to-source path from the middle layer or the source station.
The dispatch center records the access information of all resources, and stores, retrieves, and responds with that information. Whenever a request reaches the dispatch center, the dispatch center records it regardless of whether the resource has already been accessed by other nodes, and it then treats the requesting node as certainly caching the content of the resource, because whatever status code is returned, that node will go on to fetch and cache the content and respond to the user. This is also the source of the dispatch center's data: the data is driven by requests, and the more distinct requests there are, the more data is stored.
Fig. 7 shows the processing flow of the dispatch center. When a request arrives, the dispatch center checks whether the request parameters are legal, and if not, directly returns an error code indicating an illegal request. If the request is valid, the database is queried for the requested resource to confirm whether the resource has been accessed before; if so, a node is selected from the accessed nodes according to an algorithm, packaged into a 302 response, and returned to the requester. If no node has accessed the resource, the resource-not-found response code is returned directly. Whether a 302 response or a resource-not-found response is returned, one more thing must be done: the node making this access is appended to the access list of the resource, so that the next request can fetch the content from a node that has already accessed it.
For example, the dispatch center receives a request for the resource http://abc.com/1.txt from the requesting node nncm01. Since this is the first access, the dispatch center's access-node list for this resource is empty, so the dispatch center directly returns the XXX code and at the same time puts the node nncm01 into the access list of the resource. Then another request arrives, again for http://abc.com/1.txt, from the requesting node nncm02. From the access list of the resource, the dispatch center knows that nncm01 has accessed it before, and since only this one node has accessed it, the nncm01 node information is directly assembled into the 302 response http://{ip of nncm01}/abc.com/1.txt and returned to nncm02. Thus nncm02 obtains the content from nncm01 via the 302 response, instead of going to the middle layer. At the same time, the dispatch center adds nncm02 to the access list for http://abc.com/1.txt, so the access list of the resource now contains the two nodes nncm01 and nncm02. If another node accesses the resource, the dispatch center returns one of the two nodes nncm01 and nncm02 to the new node, and the new node is added to the access list.
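The access-list bookkeeping walked through above can be sketched as a small class. This is an illustrative sketch: the not-found status code (404 here) and the node-to-IP map are placeholder assumptions, since the patent does not fix the actual code, and the node selection algorithm is elided to "first node in the list".

```python
from collections import defaultdict

class DispatchCenter:
    """Minimal sketch of the dispatch center's access-list bookkeeping."""

    def __init__(self, node_ips):
        self.node_ips = node_ips               # node name -> IP (assumed mapping)
        self.access_lists = defaultdict(list)  # resource URL -> accessing nodes

    def handle(self, resource, requesting_node):
        nodes = self.access_lists[resource]
        if nodes:
            # Some node has already fetched the content: redirect with a 302.
            source = nodes[0]                   # selection algorithm elided
            path = resource.split("://", 1)[1]  # e.g. "abc.com/1.txt"
            response = (302, f"http://{self.node_ips[source]}/{path}")
        else:
            # First access: no node has the content yet, return not-found.
            response = (404, None)
        nodes.append(requesting_node)           # record this access either way
        return response
```

Replaying the example: the request from nncm01 gets a not-found response, the request from nncm02 gets a 302 pointing at nncm01's IP, and the access list then holds both nodes.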
In terms of bandwidth revenue: suppose a request consumes 1M of bandwidth. When returning to the middle layer, 1M of middle-layer bandwidth is generated, and the paid bandwidth is 1M. After the invention is adopted, the request is transferred to another edge node, that is, the bandwidth of the edge node increases by 1M while the bandwidth of the middle layer decreases by 1M. Suppose the paid bandwidth of the edge node is billed at 100M (if usage is below 100M, 100M must still be paid for). If the added 1M does not push the paid bandwidth over 100M, the transfer yields a 100% benefit. If the billed bandwidth increases by 0.5M, i.e. the edge paid bandwidth becomes 100.5M, the transfer yields a (1 - 0.5/1) = 50% benefit. In the extreme case where the paid bandwidth increases by the full 1M to 101M, the transfer yields no benefit. In practice, edge paid bandwidth mostly has spare capacity, so in most cases the bandwidth benefit is positive. In addition, policy control means can further enlarge the profit margin.
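The benefit arithmetic above reduces to one formula, sketched here for clarity (the function name and units are illustrative):

```python
def transfer_benefit(offloaded_mbps, edge_billed_increase_mbps):
    """Fraction of the offloaded middle-layer bandwidth that becomes net
    savings: 1 - (increase in billed edge bandwidth / offloaded bandwidth),
    floored at zero for the worst case."""
    return max(0.0, 1.0 - edge_billed_increase_mbps / offloaded_mbps)
```

With a 1M offloaded request: a 0M billing increase gives 100% benefit, a 0.5M increase gives 50%, and a full 1M increase gives 0, matching the three cases in the text.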
Since the dispatch center controls the returned 302 responses, in this embodiment, when the dispatch center determines that no edge node has requested the target resource, it can actively return the node information of a low-cost second edge node (such as a metropolitan-area-network or other non-mainstream node) to the target edge node, actively steering the request to the low-cost node. The second edge node acquires the target resource from the source station or the middle layer, and the target edge node acquires the target resource from the second edge node. Combined with means such as a bandwidth adjustment policy and a node selection algorithm, the direction of requests can be flexibly controlled, achieving goals such as reducing cost and improving the utilization of non-mainstream nodes.
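One way this low-cost steering could look is sketched below. The per-node `"cost"` field and the threshold semantics are assumptions loosely matching the "first threshold" on bandwidth cost described elsewhere in the patent.

```python
def pick_low_cost_node(candidates, cost_threshold):
    """Pick a low-cost second edge node (e.g. a metropolitan-area-network
    node) to fetch from the source station or middle layer on behalf of
    the target edge node. Returns None if no node is cheap enough."""
    cheap = [n for n in candidates if n["cost"] < cost_threshold]
    if not cheap:
        return None  # no cheap node: fall back to a normal back-to-source
    return min(cheap, key=lambda n: n["cost"])
```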
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
According to another aspect of the embodiments of the present application, there is also provided a content distribution method applied to an edge node, as shown in fig. 8, including:
s802, sending a target cache request to a dispatching center, wherein the target cache request is used for requesting to cache target resources in a source station or a middle layer into a target edge node;
and S804, receiving a target cache instruction returned by the dispatching center, wherein the target cache instruction is used for indicating a target edge node to acquire a target resource from a first edge node, and the first edge node is an edge node which has requested the target resource in advance.
Optionally, in this embodiment, a Content Delivery Network (CDN) is used. The CDN is a content delivery network constructed on the network, and by means of edge servers (edge nodes) deployed in various places, through functional modules of load balancing, content delivery, scheduling, and the like of a central platform, a user can obtain required content nearby, network congestion is reduced, and the access response speed and hit rate of the user are improved.
The global scheduling system of the CDN schedules a user's access request to a CDN edge node. After the user accesses a CDN edge node, if the currently requested resource exists on the node, the node directly returns the content to the user; if the requested resource is not on the node, the edge node pulls the content back from the source station, returns the content to the user, and at the same time caches the content on the edge node, so that the next access to the resource is served directly. To reduce the pressure on the source station, multiple cache layers may be provided, achieving multi-level caching and back-to-source convergence. As shown in fig. 2, fig. 2 is a schematic diagram of an alternative CDN. In fig. 2, the user requests the target resource from edge node 1; edge node 1 has not cached the target resource, so edge node 1 can acquire the target resource from middle layer 1. If middle layer 1 does not have the target resource, it acquires the target resource from the source station, and the target resource is finally delivered to the user.
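The multi-level lookup in fig. 2 can be sketched as follows. The dict-based caches and the assumption that the source station always holds the resource are illustrative simplifications.

```python
def fetch(url, edge_cache, middle_cache, origin):
    """Sketch of the lookup chain: edge node -> middle layer -> source
    station, caching at each level on the way back."""
    if url in edge_cache:
        return edge_cache[url]       # hit at the edge: return directly
    if url in middle_cache:
        content = middle_cache[url]  # hit at the middle layer
    else:
        content = origin[url]        # back to the source station
        middle_cache[url] = content  # middle layer caches the content
    edge_cache[url] = content        # edge caches for the next access
    return content
```

After the first miss, both caches hold the content and the next access is served at the edge.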
Optionally, in this embodiment, the target edge node may be any one of CDN edge nodes. After receiving a request of a user for requesting a target resource, the target edge node needs to acquire the target resource if the target edge node does not have the target resource. In this embodiment, the target edge node may send a target cache request to the scheduling center, and the scheduling center may determine which edge nodes have requested the target resource after acquiring the target cache request sent by the target edge node. And then, the node information of the first edge node which requests the target resource is sent to the target edge node, and the target edge node acquires the target resource from the first edge node.
Optionally, in this embodiment, the type of the target resource is not limited; it may be any one of, or any combination of, video, audio, pictures, text, files, and the like.
Optionally, the edge nodes in this embodiment may be arranged at different positions as needed. The edge node and the source station or the middle layer can upload and download data. The user can access the required data by accessing the nearby edge node. The source station or middle tier or edge node may be a server.
As shown in fig. 3, when a user requests a target resource from an edge node 1, a scheduling center obtains a target cache request and returns a target cache instruction to the edge node 1, and the edge node 1 obtains the target resource from an edge node 2 and issues the target resource to the user.
Through the above method, when a user requests the target resource from an edge node that has not cached it, the edge node can acquire the target resource from another edge node that has already requested it, without going to the source station or the middle layer, thereby reducing the distribution cost of the content delivery network.
For other examples of this embodiment, please refer to the above examples, which are not described herein again.
According to another aspect of the embodiments of the present application, there is also provided a content distribution apparatus applied to a scheduling center, as shown in fig. 9, including:
an obtaining unit 902, configured to obtain a target cache request sent by a target edge node, where the target cache request is used to request that a target resource in a source station or a middle layer is cached in the target edge node;
a first sending unit 904, configured to send a target cache instruction to a target edge node, where the target cache instruction is used to instruct the target edge node to obtain a target resource from a first edge node, and the first edge node is an edge node that has requested the target resource in advance.
Optionally, in this embodiment, a Content Delivery Network (CDN) is used. The CDN is a content delivery network constructed on the network, and by means of edge servers (edge nodes) deployed in various places, through functional modules of load balancing, content delivery, scheduling, and the like of a central platform, a user can obtain required content nearby, network congestion is reduced, and the access response speed and hit rate of the user are improved.
The global scheduling system of the CDN schedules a user's access request to a CDN edge node. After the user accesses a CDN edge node, if the currently requested resource exists on the node, the node directly returns the content to the user; if the requested resource is not on the node, the edge node pulls the content back from the source station, returns the content to the user, and at the same time caches the content on the edge node, so that the next access to the resource is served directly. To reduce the pressure on the source station, multiple cache layers may be provided, achieving multi-level caching and back-to-source convergence. As shown in fig. 2, fig. 2 is a schematic diagram of an alternative CDN. In fig. 2, the user requests the target resource from edge node 1; edge node 1 has not cached the target resource, so edge node 1 can acquire the target resource from middle layer 1. If middle layer 1 does not have the target resource, it acquires the target resource from the source station, and the target resource is finally delivered to the user.
Optionally, in this embodiment, the target edge node may be any one of CDN edge nodes. After receiving a request of a user for requesting a target resource, the target edge node needs to acquire the target resource if the target edge node does not have the target resource. In this embodiment, the target edge node may send a target cache request to the scheduling center, and the scheduling center may determine which edge nodes have requested the target resource after acquiring the target cache request sent by the target edge node. And then, the node information of the first edge node which requests the target resource is sent to the target edge node, and the target edge node acquires the target resource from the first edge node.
Optionally, in this embodiment, the type of the target resource is not limited; it may be any one of, or any combination of, video, audio, pictures, text, files, and the like.
Optionally, the edge nodes in this embodiment may be arranged at different positions as needed. The edge node and the source station or the middle layer can upload and download data. The user can access the required data by accessing the nearby edge node. The source station or middle tier or edge node may be a server.
As shown in fig. 3, when a user requests a target resource from an edge node 1, a scheduling center obtains a target cache request and returns a target cache instruction to the edge node 1, and the edge node 1 obtains the target resource from an edge node 2 and issues the target resource to the user.
Through the above apparatus, when a user requests the target resource from an edge node that has not cached it, the edge node can acquire the target resource from another edge node that has already requested it, without going to the source station or the middle layer, thereby reducing the distribution cost of the content delivery network.
For other examples of this embodiment, please refer to the above examples, which are not described herein again.
According to another aspect of the embodiments of the present application, there is also provided a content distribution apparatus, applied to an edge node, as shown in fig. 10, including:
a sending unit 1002, configured to send a target cache request to a scheduling center, where the target cache request is used to request that a target resource in a source station or a middle layer is cached in a target edge node;
the first receiving unit 1004 is configured to receive a target cache instruction returned by the scheduling center, where the target cache instruction is used to instruct a target edge node to acquire a target resource from the first edge node, and the first edge node is an edge node that has requested the target resource in advance.
Optionally, in this embodiment, a Content Delivery Network (CDN) is used. The CDN is a content delivery network constructed on the network, and by means of edge servers (edge nodes) deployed in various places, through functional modules of load balancing, content delivery, scheduling, and the like of a central platform, a user can obtain required content nearby, network congestion is reduced, and the access response speed and hit rate of the user are improved.
The global scheduling system of the CDN schedules a user's access request to a CDN edge node. After the user accesses a CDN edge node, if the currently requested resource exists on the node, the node directly returns the content to the user; if the requested resource is not on the node, the edge node pulls the content back from the source station, returns the content to the user, and at the same time caches the content on the edge node, so that the next access to the resource is served directly. To reduce the pressure on the source station, multiple cache layers may be provided, achieving multi-level caching and back-to-source convergence. As shown in fig. 2, fig. 2 is a schematic diagram of an alternative CDN. In fig. 2, the user requests the target resource from edge node 1; edge node 1 has not cached the target resource, so edge node 1 can acquire the target resource from middle layer 1. If middle layer 1 does not have the target resource, it acquires the target resource from the source station, and the target resource is finally delivered to the user.
Optionally, in this embodiment, the target edge node may be any one of CDN edge nodes. After receiving a request of a user for requesting a target resource, the target edge node needs to acquire the target resource if the target edge node does not have the target resource. In this embodiment, the target edge node may send a target cache request to the scheduling center, and the scheduling center may determine which edge nodes have requested the target resource after acquiring the target cache request sent by the target edge node. And then, the node information of the first edge node which requests the target resource is sent to the target edge node, and the target edge node acquires the target resource from the first edge node.
Optionally, in this embodiment, the type of the target resource is not limited; it may be any one of, or any combination of, video, audio, pictures, text, files, and the like.
Optionally, the edge nodes in this embodiment may be arranged at different positions as needed. The edge node and the source station or the middle layer can upload and download data. The user can access the required data by accessing the nearby edge node. The source station or middle tier or edge node may be a server.
As shown in fig. 3, when a user requests a target resource from an edge node 1, a scheduling center obtains a target cache request and returns a target cache instruction to the edge node 1, and the edge node 1 obtains the target resource from an edge node 2 and issues the target resource to the user.
Through the above apparatus, when a user requests the target resource from an edge node that has not cached it, the edge node can acquire the target resource from another edge node that has already requested it, without going to the source station or the middle layer, thereby reducing the distribution cost of the content delivery network.
For other examples of this embodiment, please refer to the above examples, which are not described herein again.
According to yet another aspect of the embodiments of the present invention, there is also provided an electronic device for implementing the content distribution method, which may include a memory having a computer program stored therein and a processor configured to execute the steps in the content distribution method by the computer program.
According to still another aspect of embodiments of the present invention, there is also provided a computer-readable storage medium having a computer program stored therein, wherein the computer program, when executed by a processor, performs the steps in the content distribution method described above.
Alternatively, in this embodiment, a person skilled in the art may understand that all or part of the steps in the methods of the foregoing embodiments may be implemented by a program instructing hardware associated with the terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include: flash disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
The integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in the above computer-readable storage medium. Based on such understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including instructions for causing one or more computer devices (which may be personal computers, servers, or network devices) to execute all or part of the steps of the method according to the embodiments of the present invention.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, a division of a unit is merely a division of a logic function, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (15)

1. A content distribution method is applied to a scheduling center and is characterized by comprising the following steps:
acquiring a target cache request sent by a target edge node, wherein the target cache request is used for requesting to cache a target resource in a source station or a middle layer into the target edge node;
and sending a target cache instruction to the target edge node, wherein the target cache instruction is used for instructing the target edge node to acquire the target resource from a first edge node, and the first edge node is an edge node which has requested the target resource in advance.
2. The method of claim 1, wherein prior to obtaining the target cache request sent by the target edge node, the method further comprises:
under the condition of acquiring a cache request for requesting to cache any one resource in the source station or the middle layer, recording node information of the edge node requesting the resource into an access record of the resource.
3. The method of claim 2, wherein after obtaining the target cache request sent by the target edge node, the method further comprises:
under the condition that the access record of the target resource does not include node information of any edge node, sending a first caching instruction to the target edge node, wherein the first caching instruction is used for indicating the target edge node to cache the target resource from the source station or the middle layer;
and recording the node information of the target edge node into the access record of the target resource.
4. The method of claim 2, wherein after obtaining the target cache request sent by the target edge node, the method further comprises:
under the condition that the access record of the target resource does not include node information of any edge node, sending a second cache instruction to a second edge node and sending a third cache instruction to the target edge node, wherein the second cache instruction is used for instructing the second edge node to acquire the target resource from the source station or the middle layer, the third cache instruction is used for instructing the target edge node to acquire the target resource from the second edge node, and the cost of bandwidth consumed by the target edge node for acquiring the target resource from the second edge node is less than a first threshold;
and recording the node information of the second edge node and the node information of the target edge node into the access record of the target resource.
5. The method of claim 2, wherein sending the target cache instruction to the target edge node comprises:
and selecting one edge node from the plurality of edge nodes as the first edge node when the access record of the target resource indicates that the target resource is accessed by the plurality of edge nodes.
6. The method of claim 5, wherein the selecting one edge node from the plurality of edge nodes as the first edge node comprises:
and taking the edge node which is closest to the target edge node in the plurality of edge nodes as the first edge node.
7. The method of claim 5, wherein the selecting one edge node from the plurality of edge nodes as the first edge node comprises:
and taking the edge node with the minimum load in the plurality of edge nodes as the first edge node.
8. The method according to any of claims 1 to 7, wherein the obtaining the target cache request sent by the target edge node comprises:
and acquiring the domain name and the URL address of the target resource and the node information of the target edge node in the target cache request.
9. A content distribution method applied to an edge node is characterized by comprising the following steps:
sending a target cache request to a dispatching center, wherein the target cache request is used for requesting to cache target resources in a source station or a middle layer into a target edge node;
and receiving a target cache instruction returned by the dispatching center, wherein the target cache instruction is used for indicating the target edge node to acquire the target resource from a first edge node, and the first edge node is an edge node which has requested the target resource in advance.
10. The method according to claim 9, wherein the scheduling center is further configured to, in a case where a cache request for caching any one of the resources in the source station or the middle tier is obtained, record node information of an edge node that requests the resource in an access record of the resource, and after the target cache request is sent to the scheduling center, the method further includes:
under the condition that the access record of the target resource does not include node information of any edge node, receiving a first caching instruction sent by the dispatching center, wherein the first caching instruction is used for indicating the target edge node to cache the target resource from the source station or the middle layer;
and responding to the first caching instruction, and requesting the source station or the middle layer to cache the target resource.
11. The method according to claim 9, wherein the scheduling center is further configured to, in a case where a cache request for caching any one of the resources in the source station or the middle tier is obtained, record node information of an edge node that requests the resource in an access record of the resource, and after the target cache request is sent to the scheduling center, the method further includes:
receiving a third cache instruction sent by the scheduling center under the condition that the access record of the target resource does not include node information of any edge node, wherein the third cache instruction is used for instructing the target edge node to acquire the target resource from a second edge node, the cost of bandwidth consumed by the target edge node for acquiring the target resource from the second edge node is less than a first threshold, and the scheduling center is further used for sending a second cache instruction to the second edge node, and the second cache instruction is used for instructing the second edge node to acquire the target resource from the source station or the middle layer;
and responding to the third cache instruction, and acquiring the target resource from the second edge node.
12. A content distribution apparatus applied to a scheduling center, comprising:
an obtaining unit, configured to obtain a target cache request sent by a target edge node, where the target cache request is used to request that a target resource in a source station or a middle layer is cached in the target edge node;
a first sending unit, configured to send a target cache instruction to the target edge node, where the target cache instruction is used to instruct the target edge node to obtain the target resource from a first edge node, and the first edge node is an edge node that has requested the target resource in advance.
13. A content distribution apparatus applied to an edge node, comprising:
the system comprises a sending unit, a scheduling center and a cache unit, wherein the sending unit is used for sending a target cache request to the scheduling center, and the target cache request is used for requesting to cache target resources in a source station or a middle layer into a target edge node;
a first receiving unit, configured to receive a target cache instruction returned by the scheduling center, where the target cache instruction is used to instruct the target edge node to acquire the target resource from a first edge node, and the first edge node is an edge node that has requested the target resource in advance.
14. A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, carries out the method of any one of claims 1 to 8 or 9 to 11.
15. An electronic device, comprising a memory and a processor, wherein the memory stores a computer program and the processor is configured to execute the method of any one of claims 1 to 8 or 9 to 11 by means of the computer program.
CN202111152471.0A 2021-09-29 2021-09-29 Content distribution method, content distribution device, storage medium and electronic equipment Active CN113873302B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111152471.0A CN113873302B (en) 2021-09-29 2021-09-29 Content distribution method, content distribution device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN113873302A true CN113873302A (en) 2021-12-31
CN113873302B CN113873302B (en) 2024-04-26

Family

ID=79000535

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111152471.0A Active CN113873302B (en) 2021-09-29 2021-09-29 Content distribution method, content distribution device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN113873302B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104717231A (en) * 2014-12-18 2015-06-17 北京蓝汛通信技术有限责任公司 Pre-distribution processing method and device of content distribution network
CN111263171A (en) * 2020-02-25 2020-06-09 北京达佳互联信息技术有限公司 Live streaming media data acquisition method and edge node area networking system
CN111770119A (en) * 2020-09-03 2020-10-13 云盾智慧安全科技有限公司 Website resource acquisition method, system, device and computer storage medium
CN112153160A (en) * 2020-09-30 2020-12-29 北京金山云网络技术有限公司 Access request processing method and device and electronic equipment
CN112688980A (en) * 2019-10-18 2021-04-20 上海哔哩哔哩科技有限公司 Resource distribution method and device, and computer equipment
WO2021135835A1 (en) * 2019-12-31 2021-07-08 北京金山云网络技术有限公司 Resource acquisition method and apparatus, and node device in cdn network

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114466018A (en) * 2022-03-22 2022-05-10 北京有竹居网络技术有限公司 Scheduling method and device for content distribution network, storage medium and electronic equipment
WO2023179505A1 (en) * 2022-03-22 2023-09-28 北京有竹居网络技术有限公司 Scheduling method and apparatus for content delivery network, and storage medium and electronic device

Similar Documents

Publication Publication Date Title
US10218806B2 (en) Handling long-tail content in a content delivery network (CDN)
US11032387B2 (en) Handling of content in a content delivery network
US11089129B2 (en) Accelerated network delivery of channelized content
AU2011274249B2 (en) Systems and methods for storing digital content
CN106031130A (en) Content delivery network architecture with edge proxy
CN107835437B (en) Dispatching method based on more cache servers and device
WO2012105967A1 (en) Asset management architecture for content delivery networks
US20230239376A1 (en) Request processing in a content delivery framework
CN109873855A (en) A kind of resource acquiring method and system based on block chain network
CN113873302B (en) Content distribution method, content distribution device, storage medium and electronic equipment
US10924573B2 (en) Handling long-tail content in a content delivery network (CDN)
CN109716731A (en) For providing the system and method for functions reliably and efficiently data transmission
KR20050060783A (en) Method for retrieving and downloading digital media files through network and medium on which the program for executing the method is recorded
CN115277851A (en) Service request processing method and system
KR20150011087A (en) Distributed caching management method for contents delivery network service and apparatus therefor
CN103609074A (en) Application specific WEB request routing
US8301775B2 (en) Affiliate bandwidth management
KR20150010415A (en) Contents delivery network service method and broker apparatus for distributed caching

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant