CN113873302B - Content distribution method, content distribution device, storage medium and electronic equipment - Google Patents

Content distribution method, content distribution device, storage medium and electronic equipment

Info

Publication number
CN113873302B
CN113873302B (application CN202111152471.0A)
Authority
CN
China
Prior art keywords
target
edge node
resource
node
cache
Prior art date
Legal status
Active
Application number
CN202111152471.0A
Other languages
Chinese (zh)
Other versions
CN113873302A (en)
Inventor
刘贵荣
杨泽森
Current Assignee
Beijing Kingsoft Cloud Network Technology Co Ltd
Original Assignee
Beijing Kingsoft Cloud Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Kingsoft Cloud Network Technology Co Ltd
Priority to CN202111152471.0A
Publication of CN113873302A
Application granted
Publication of CN113873302B
Legal status: Active


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25 Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/262 Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists
    • H04N21/26208 Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists, the scheduling operation being performed under constraints
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/50 Network services
    • H04L67/56 Provisioning of proxy services
    • H04L67/568 Storing data temporarily at an intermediate stage, e.g. caching
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/231 Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
    • H04N21/23106 Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion, involving caching operations

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Databases & Information Systems (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention discloses a content distribution method, a content distribution device, a storage medium and electronic equipment. The method comprises the following steps: acquiring a target cache request sent by a target edge node, wherein the target cache request is used for requesting to cache target resources in a source station or a middle layer into the target edge node; and sending a target cache instruction to the target edge node, wherein the target cache instruction is used for indicating the target edge node to acquire target resources from a first edge node, and the first edge node is the edge node which has requested the target resources in advance. The invention solves the technical problem of high distribution cost in the content distribution process.

Description

Content distribution method, content distribution device, storage medium and electronic equipment
Technical Field
The present invention relates to the field of intelligent devices, and in particular, to a content distribution method, a device, a storage medium, and an electronic device.
Background
In the prior art, during content distribution, if a user requests a target resource from an edge node and the edge node has not cached the target resource, the edge node needs to acquire the target resource from a source station or a middle layer and then deliver it to the user. If all such resources are acquired from the source station or the middle layer, the bandwidth distribution cost increases.
Disclosure of Invention
The embodiment of the invention provides a content distribution method, a content distribution device, a storage medium and electronic equipment, which are used for at least solving the technical problem of high distribution cost in the content distribution process.
According to an aspect of an embodiment of the present invention, there is provided a content distribution method, applied to a scheduling center, including: acquiring a target cache request sent by a target edge node, wherein the target cache request is used for requesting to cache target resources in a source station or a middle layer into the target edge node; and sending a target cache instruction to the target edge node, wherein the target cache instruction is used for instructing the target edge node to acquire the target resource from a first edge node, and the first edge node is an edge node which has previously requested the target resource.
According to another aspect of the embodiment of the present invention, there is also provided a content distribution method, applied to an edge node, including: sending a target cache request to a dispatching center, wherein the target cache request is used for requesting to cache target resources in a source station or a middle layer into a target edge node; and receiving a target cache instruction returned by the dispatching center, wherein the target cache instruction is used for instructing the target edge node to acquire the target resource from a first edge node, and the first edge node is an edge node which has previously requested the target resource.
According to still another aspect of the embodiments of the present invention, there is provided a content distribution apparatus applied to a scheduling center, including: an acquisition unit, configured to acquire a target cache request sent by a target edge node, where the target cache request is used to request to cache a target resource in a source station or a middle layer into the target edge node; and a first sending unit, configured to send a target cache instruction to the target edge node, where the target cache instruction is used to instruct the target edge node to acquire the target resource from a first edge node, and the first edge node is an edge node which has previously requested the target resource.
As an alternative example, the above apparatus further includes: a first recording unit, configured to record, when a cache request for caching any resource in the source station or the middle layer is acquired before the target cache request sent by the target edge node, node information of the edge node requesting the resource into an access record of the resource.
As an alternative example, the above apparatus further includes: a second sending unit, configured to send a first cache instruction to a target edge node when node information of any one edge node is not included in an access record of the target resource after a target cache request sent by the target edge node is obtained, where the first cache instruction is used to instruct the target edge node to cache the target resource from the source station or a middle layer; and the second recording unit is used for recording the node information of the target edge node into the access record of the target resource.
As an alternative example, the above apparatus further includes: a third sending unit, configured to send a second cache instruction to a second edge node and send a third cache instruction to the target edge node when node information of any one edge node is not included in an access record of the target resource after a target cache request sent by the target edge node is acquired, where the second cache instruction is used to instruct the second edge node to acquire the target resource from the source station or a middle layer, and the third cache instruction is used to instruct the target edge node to acquire the target resource from the second edge node, and a cost of a bandwidth consumed by the target edge node to acquire the target resource from the second edge node is less than a first threshold; and a third recording unit, configured to record node information of the second edge node and node information of the target edge node into an access record of the target resource.
As an optional example, the first transmitting unit includes: and the selection module is used for selecting one edge node from the plurality of edge nodes as the first edge node when the access record of the target resource indicates that the plurality of edge nodes access the target resource.
As an alternative example, the selecting module includes: and the first determining submodule is used for taking the edge node closest to the target edge node from the plurality of edge nodes as the first edge node.
As an alternative example, the selecting module includes: and the second determining submodule is used for taking the edge node with the smallest load of the edge nodes as the first edge node.
As an alternative example, the above-described acquisition unit includes: an acquisition module, configured to acquire, from the target cache request, the domain name and URL address of the target resource and the node information of the target edge node.
According to still another aspect of the embodiments of the present invention, there is provided a content distribution apparatus applied to an edge node, including: a sending unit, configured to send a target cache request to a scheduling center, where the target cache request is used to request to cache a target resource in a source station or a middle layer into a target edge node; and the first receiving unit is used for receiving a target cache instruction returned by the dispatching center, wherein the target cache instruction is used for instructing the target edge node to acquire the target resource from a first edge node, and the first edge node is an edge node which has previously requested the target resource.
As an optional example, the scheduling center is further configured to record, in a case where a cache request for caching any resource in the source station or the middle layer is acquired, node information of the edge node requesting the resource into an access record of the resource, where the apparatus further includes: a second receiving unit, configured to receive, after the target cache request is sent to the dispatch center, a first cache instruction sent by the dispatch center when node information of any edge node is not included in the access record of the target resource, where the first cache instruction is used to instruct the target edge node to cache the target resource from the source station or the middle layer; and a first caching unit, configured to respond to the first cache instruction by requesting to cache the target resource from the source station or the middle layer.
As an optional example, the scheduling center is further configured to record, in a case where a cache request for caching any resource in the source station or the middle layer is acquired, node information of the edge node requesting the resource into an access record of the resource, where the apparatus further includes: a third receiving unit, configured to receive, after the target cache request is sent to the scheduling center, a third cache instruction sent by the scheduling center when node information of any edge node is not included in the access record of the target resource, where the third cache instruction is used to instruct the target edge node to acquire the target resource from a second edge node, the cost of the bandwidth consumed by the target edge node to acquire the target resource from the second edge node is less than a first threshold, and the scheduling center is further configured to send a second cache instruction to the second edge node, where the second cache instruction is used to instruct the second edge node to acquire the target resource from the source station or the middle layer; and a second caching unit, configured to respond to the third cache instruction by acquiring the target resource from the second edge node.
According to still another aspect of the embodiments of the present invention, there is also provided a storage medium having stored therein a computer program, wherein the computer program is configured to execute the above-described content distribution method at runtime.
According to still another aspect of the embodiments of the present invention, there is also provided an electronic device including a memory in which a computer program is stored, and a processor configured to execute the content distribution method described above by the computer program.
In the embodiment of the invention, a target cache request sent by a target edge node is acquired, where the target cache request is used for requesting to cache a target resource in a source station or a middle layer into the target edge node; and a target cache instruction is sent to the target edge node, where the target cache instruction is used for instructing the target edge node to acquire the target resource from a first edge node, and the first edge node is an edge node which has previously requested the target resource. In this way, the target edge node does not need to acquire the target resource from the source station or the middle layer, thereby solving the technical problem of high distribution cost in the content distribution process.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
FIG. 1 is a flow chart of an alternative content distribution method according to an embodiment of the present invention;
FIG. 2 is a content distribution architecture diagram of an alternative content distribution method according to an embodiment of the present invention;
FIG. 3 is a content distribution architecture diagram of another alternative content distribution method according to an embodiment of the present invention;
FIG. 4 is a schematic view of an access record of an alternative content distribution method according to an embodiment of the present invention;
FIG. 5 is a system flow diagram of an alternative content distribution method according to an embodiment of the present invention;
FIG. 6 is a system flow diagram of another alternative content distribution method according to an embodiment of the present invention;
FIG. 7 is a flow chart of another alternative content distribution method according to an embodiment of the present invention;
FIG. 8 is a flow chart of yet another alternative content distribution method according to an embodiment of the present invention;
FIG. 9 is a schematic structural view of an alternative content distribution apparatus according to an embodiment of the present invention;
FIG. 10 is a schematic structural view of another alternative content distribution apparatus according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art may better understand the present invention, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of protection of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to a first aspect of an embodiment of the present invention, there is provided a content distribution method applied to a dispatch center, optionally, as shown in fig. 1, the method includes:
s102, acquiring a target cache request sent by a target edge node, wherein the target cache request is used for requesting to cache target resources in a source station or a middle layer into the target edge node;
S104, sending a target cache instruction to a target edge node, wherein the target cache instruction is used for indicating the target edge node to acquire target resources from a first edge node, and the first edge node is the edge node which has requested the target resources in advance.
Optionally, this embodiment may be applied to a content delivery network (Content Delivery Network, CDN). A CDN is a content delivery network built on top of the existing network. Relying on edge servers (edge nodes) deployed in various locations and on functional modules of the central platform such as load balancing, content delivery and scheduling, it enables a user to obtain the required content nearby, which reduces network congestion and improves the user's access response speed and hit rate.
The global scheduling system of the CDN dispatches a user's access request to a CDN edge node. After the user accesses the CDN edge node, if the requested resource exists on that node, the node returns the content to the user directly; if not, the edge node pulls the content from the source station, returns it to the user, and caches it locally, so that the next access to the resource can be served directly. To reduce the pressure on the source station, multiple cache layers may be provided, achieving multi-level caching and back-to-source convergence. As shown in fig. 2, fig. 2 is a schematic diagram of an alternative CDN. In fig. 2, the user requests the target resource from edge node 1, and edge node 1 has not cached the target resource, so edge node 1 acquires the target resource from middle layer 1. If middle layer 1 does not have the target resource either, the target resource is acquired from the source station and finally delivered to the user.
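For illustration only, the following minimal Python sketch mirrors the conventional multi-level back-to-source lookup described above and shown in fig. 2; the in-memory dictionaries and resource names are assumptions standing in for real caches.

```python
# Hypothetical stand-ins for the edge cache, middle layer and source station of fig. 2.
edge_cache = {}
middle_cache = {"/video/2.mp4": b"middle copy"}
source_station = {"/video/1.mp4": b"origin copy", "/video/2.mp4": b"origin copy"}

def serve(url: str) -> bytes:
    """Return the resource, falling back edge node -> middle layer -> source station."""
    if url in edge_cache:                  # cache hit on the edge node
        return edge_cache[url]
    content = middle_cache.get(url)        # miss: pull from the middle layer
    if content is None:
        content = source_station[url]      # middle-layer miss: go back to the source station
        middle_cache[url] = content        # middle layer caches the content on the way back
    edge_cache[url] = content              # edge node caches it for the next access
    return content

print(serve("/video/1.mp4"))  # first access goes all the way back to the source station
print(serve("/video/1.mp4"))  # second access is served directly from the edge cache
```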
Optionally, in this embodiment, the target edge node may be any edge node of the CDN edge nodes. After receiving the request of the user for the target resource, the target edge node needs to acquire the target resource if the target edge node does not have the target resource. In this embodiment, the target edge node may send a target cache request to the dispatch center, and the dispatch center may determine which edge nodes have requested the target resource after obtaining the target cache request sent by the target edge node. And then, node information of the first edge node which requests the target resource is issued to the target edge node, and the target edge node acquires the target resource from the first edge node.
Alternatively, in this embodiment, the type of the target resource is not limited. It may be any one of, or a combination of, video, audio, pictures, text, files, and so on.
Alternatively, the edge nodes in this embodiment may be deployed at different locations as needed. An edge node may upload data to and download data from the source station or the middle layer. The user obtains the required data by accessing a nearby edge node. The source station, the middle layer and the edge nodes may each be servers.
As shown in fig. 3, when a user requests a target resource from edge node 1 and the resource is not cached there, edge node 1 sends a target cache request; the scheduling center acquires the target cache request and returns a target cache instruction to edge node 1, and edge node 1 acquires the target resource from edge node 2 and delivers it to the user.
According to the method, when the user obtains the target resource from the edge node and the edge node does not cache the target resource, the edge node can obtain the target resource from other edge nodes which have requested the target resource, and the target resource does not need to be obtained from a source station or a middle layer, so that the effect of reducing the distribution cost of the content distribution network is achieved.
As an optional implementation manner, before acquiring the target cache request sent by the target edge node, the method further includes:
when a cache request requesting to cache any resource in the source station or the middle layer is acquired, node information of the edge node requesting the resource is recorded in an access record of the resource.
Alternatively, in this embodiment, the dispatch center may record an access record of the edge node accessing each resource. If any edge node accesses a resource of the source station or the middle layer, the record of the access of the edge node to the resource is correspondingly recorded.
Alternatively, in this embodiment, an access record table may be established for the accessed resource, and node information of the edge node accessing the resource may be recorded in the access record table. The node information may be a node unique identifier of the edge node.
For example, as shown in fig. 4, fig. 4 is an alternative access record. After target resource 1 in the source station or the middle layer has been accessed by edge nodes 1 to 3, the access record of target resource 1 is kept in the dispatch center. When edge node 3 requests target resource 2, the access record of target resource 2 is likewise kept in the dispatch center. Fig. 4 is intended only as an example and does not limit the access records of the target resources of the present application.
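For illustration only, a minimal Python sketch of an access record table of the kind shown in fig. 4 is given below; the key format and node identifiers are assumptions, not taken from the embodiment.

```python
from collections import defaultdict

# Hypothetical in-memory access-record table kept by the dispatch center:
# resource key (domain + URL) -> list of node identifiers that have requested it.
access_records: dict[str, list[str]] = defaultdict(list)

def record_access(domain: str, url: str, node_id: str) -> None:
    """Append the requesting edge node to the access record of the resource."""
    key = f"{domain}{url}"
    if node_id not in access_records[key]:
        access_records[key].append(node_id)

# The state sketched in fig. 4: resource 1 accessed by edge nodes 1-3,
# resource 2 accessed by edge node 3 only.
for node in ("edge-node-1", "edge-node-2", "edge-node-3"):
    record_access("abc.com", "/target-resource-1", node)
record_access("abc.com", "/target-resource-2", "edge-node-3")
print(dict(access_records))
```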
As an optional implementation manner, after obtaining the target cache request sent by the target edge node, the method further includes:
Under the condition that the access record of the target resource does not comprise node information of any edge node, a first cache instruction is sent to the target edge node, wherein the first cache instruction is used for indicating the target edge node to cache the target resource from a source station or a middle layer;
and recording node information of the target edge node into an access record of the target resource.
Alternatively, in this embodiment, the target edge node may obtain the target resource from a first edge node that has requested the target resource. However, if the target resource has not been requested by any edge node, the dispatch center cannot return a first edge node to the target edge node, because no edge node can act as the first edge node. In this case, the dispatch center may return a first cache instruction to the target edge node, and the target edge node acquires the target resource from the source station or the middle layer upon receiving the first cache instruction. The flow is shown in steps S502 to S512 of fig. 5: the user requests the target resource from the edge node; if the edge node does not have the target resource, it sends a target cache request to the dispatch center; the dispatch center determines that no edge node has accessed the target resource, so a first cache instruction is returned to the edge node; the edge node obtains the target resource from the source station or the middle layer and returns it to the user. The dispatch center may save the node information of the edge node into the access record of the target resource.
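For illustration only, the following Python sketch shows the first-access branch of fig. 5 on the dispatch-center side; the symbolic response strings are assumptions, since the embodiment does not specify concrete status codes.

```python
def handle_first_access(access_records: dict, key: str, requesting_node: str) -> str:
    """First-access branch of fig. 5: if no edge node has requested the resource yet,
    answer with a 'not found on any node' style response so the requester falls back
    to the source station / middle layer. The requester is recorded either way,
    because it will hold the content after responding to the user."""
    nodes = access_records.setdefault(key, [])
    first_access = not nodes
    if requesting_node not in nodes:
        nodes.append(requesting_node)
    return "NOT_FOUND_ON_ANY_NODE" if first_access else "HAS_PRIOR_ACCESS"

records: dict = {}
print(handle_first_access(records, "abc.com/1.txt", "edge-node-1"))  # NOT_FOUND_ON_ANY_NODE
print(records)                                                       # {'abc.com/1.txt': ['edge-node-1']}
```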
As an optional implementation manner, after obtaining the target cache request sent by the target edge node, the method further includes:
Under the condition that node information of any edge node is not included in the access record of the target resource, sending a second cache instruction to a second edge node and sending a third cache instruction to the target edge node, wherein the second cache instruction is used for instructing the second edge node to acquire the target resource from a source station or a middle layer, the third cache instruction is used for instructing the target edge node to acquire the target resource from the second edge node, and the cost of bandwidth consumed by the target edge node to acquire the target resource from the second edge node is smaller than a first threshold value;
And recording the node information of the second edge node and the node information of the target edge node into an access record of the target resource.
Alternatively, in this embodiment, the target edge node may obtain the target resource from a first edge node that has requested the target resource. However, if the target resource has not been requested by any edge node, the dispatch center cannot return a first edge node to the target edge node, because no edge node can act as the first edge node. In this case, the scheduling center may select one edge node from all the edge nodes as a second edge node and return the node information of the second edge node to the target edge node, and the target edge node obtains the target resource from the second edge node. Of course, since the second edge node has not accessed the target resource either, it does not store the target resource itself; the second edge node first acquires the target resource from the source station or the middle layer, and the target edge node then acquires it from the second edge node. The scheduling center records the node information of the second edge node and the node information of the target edge node into the access record of the target resource. The flow may be as shown in steps S602 to S616 of fig. 6: the user requests the target resource from the target edge node; the target edge node has not cached the target resource, so it sends a target cache request to the dispatch center; the dispatch center determines that no edge node has accessed the target resource; the dispatch center sends a third cache instruction to the target edge node and a second cache instruction to the second edge node, where the second edge node is an edge node determined by the dispatch center and the cost of the bandwidth consumed by the target edge node to acquire the target resource from the second edge node is less than a first threshold; the second edge node acquires the target resource from the source station or the middle layer, the target edge node then acquires the target resource from the second edge node, and the target resource is returned to the user.
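For illustration only, the following Python sketch shows the fig. 6 variant in which back-to-source traffic is steered through a low-cost second edge node; the cost function, node names and instruction encoding are assumptions.

```python
def dispatch_via_low_cost_node(access_records, key, target_node, candidates,
                               transfer_cost, first_threshold):
    """If no node has the resource yet, pick a second edge node whose transfer cost to
    the target node is below `first_threshold`, instruct it to fetch from the source or
    middle layer (second cache instruction), and instruct the target node to fetch from
    it (third cache instruction). Both nodes are recorded in the access record."""
    cheap = [n for n in candidates if transfer_cost(target_node, n) < first_threshold]
    if access_records.get(key) or not cheap:
        return None
    second_node = min(cheap, key=lambda n: transfer_cost(target_node, n))
    access_records.setdefault(key, []).extend([second_node, target_node])
    return (
        {"to": second_node, "instruction": "second_cache", "fetch_from": "source_or_middle_layer"},
        {"to": target_node, "instruction": "third_cache", "fetch_from": second_node},
    )

costs = {("edge-A", "man-1"): 0.2, ("edge-A", "edge-B"): 0.9}   # hypothetical per-unit costs
print(dispatch_via_low_cost_node({}, "abc.com/1.txt", "edge-A", ["man-1", "edge-B"],
                                 lambda t, n: costs[(t, n)], first_threshold=0.5))
```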
As an alternative embodiment, sending the target cache instruction to the target edge node includes:
and selecting one edge node from the plurality of edge nodes as the first edge node when the access record of the target resource indicates that a plurality of edge nodes have accessed the target resource.
Alternatively, in this embodiment, the target edge node may obtain the target resource from a first edge node that has requested the target resource. If there are a plurality of edge nodes that have requested the target resource, the scheduling center may select one of them as the first edge node.
As an alternative embodiment, selecting an edge node from a plurality of edge nodes as the first edge node includes:
and taking the edge node closest to the target edge node among the plurality of edge nodes as a first edge node.
Optionally, in this embodiment, when there are a plurality of edge nodes that have requested the target resource and the scheduling center selects one of them as the first edge node, the distance between each of these edge nodes and the target edge node may be obtained; the edge node with the smallest distance is taken as the first edge node, and the target edge node obtains the target resource from the first edge node.
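For illustration only, a minimal Python sketch of the nearest-node selection is given below; the distance measure (here a hypothetical RTT table) is an assumption, since the embodiment does not specify how distance is measured.

```python
def pick_nearest(candidates: list[str], target_node: str, distance) -> str:
    """Return the candidate edge node closest to the target edge node."""
    return min(candidates, key=lambda node: distance(node, target_node))

rtt_ms = {("edge-1", "edge-3"): 12, ("edge-2", "edge-3"): 35}   # hypothetical measurements
print(pick_nearest(["edge-1", "edge-2"], "edge-3", lambda a, b: rtt_ms[(a, b)]))  # -> edge-1
```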
As an alternative embodiment, selecting an edge node from a plurality of edge nodes as the first edge node includes:
and taking the edge node with the smallest load of the plurality of edge nodes as a first edge node.
Optionally, in this embodiment, when there are a plurality of edge nodes that have requested the target resource and the scheduling center selects one of them as the first edge node, the current load of each of these edge nodes may be obtained; a greater load indicates a busier edge node. The edge node with the smallest load is taken as the first edge node, and the target edge node acquires the target resource from the first edge node.
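For illustration only, a minimal Python sketch of the least-load selection is given below; the load metric (here a hypothetical utilisation figure per node) is an assumption.

```python
def pick_least_loaded(candidates: list[str], load_of) -> str:
    """Return the candidate edge node with the smallest current load."""
    return min(candidates, key=load_of)

current_load = {"edge-1": 0.82, "edge-2": 0.31}   # hypothetical utilisation figures
print(pick_least_loaded(["edge-1", "edge-2"], current_load.get))  # -> edge-2
```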
As an optional implementation manner, obtaining the target cache request sent by the target edge node includes:
and acquiring, from the target cache request, the domain name and URL address of the target resource and the node information of the target edge node.
Optionally, in this embodiment, the target edge node carries the domain name and URL address of the target resource in the target cache request, and the scheduling center determines the target resource requested by the target edge node according to the domain name and URL address. The node information of the target edge node is then recorded into the access record of the target resource.
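For illustration only, the following Python sketch shows one possible shape of the target cache request carrying the three fields named above; the class and field names are assumptions, not taken from the embodiment.

```python
from dataclasses import dataclass

@dataclass
class TargetCacheRequest:
    domain: str      # domain name of the target resource
    url: str         # URL (access path) of the target resource
    node_id: str     # node information of the requesting target edge node

req = TargetCacheRequest(domain="abc.com", url="/1.txt", node_id="nncm02")
resource_key = f"{req.domain}{req.url}"   # key used to look up the access record
print(resource_key)                       # abc.com/1.txt
```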
The architecture of this embodiment is shown in fig. 3. The query request of this embodiment, like other back-to-source requests, occurs only when the target resource is not hit: the user requests the target resource from the edge node, the edge node looks up the target resource and returns it directly on a hit. If the edge node does not have the target resource, the service program on the node initiates a query request to the dispatch center, carrying key information such as the domain name, the access URL and the name of the requesting node. The dispatch center returns one of two states according to the request information. One is a 302 reply pointing to another node, where 302 refers to the redirect status in the HTTP protocol and realizes the jump function. The other is a response indicating that the resource information was not found. After receiving a response, the service program behaves differently according to the status code: on a 302 response it obtains the content from the other node, as in path 4 of fig. 3; otherwise it goes back to the middle layer or the source station along the back-to-source path to obtain the content.
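For illustration only, the following Python sketch shows the behaviour of the service program on the edge node described above; the dispatch-center query endpoint, its parameter names and the back-to-source URL are assumptions.

```python
import requests   # well-known HTTP client; a real CDN service program would differ

def fetch_on_miss(dispatch_url: str, domain: str, path: str, node_id: str) -> bytes:
    """Query the dispatch center; follow a 302 to another edge node, otherwise go back
    to the middle layer / source station along the normal back-to-source path."""
    resp = requests.get(
        dispatch_url,
        params={"domain": domain, "url": path, "node": node_id},  # assumed parameter names
        allow_redirects=False,    # inspect the 302 ourselves instead of auto-following
        timeout=2,
    )
    if resp.status_code == 302:
        # 302 points at an edge node that already holds the resource (path 4 in fig. 3)
        return requests.get(resp.headers["Location"], timeout=5).content
    # "resource not found on any node": fall back to the back-to-source path (assumed URL)
    return requests.get(f"http://middle-layer.example/{domain}{path}", timeout=5).content
```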
The dispatch center records the access information of all resources and serves lookups and responses against it. When a request reaches the dispatch center, the dispatch center records it regardless of whether the resource has been accessed by other nodes, and treats the requesting node as certain to cache the content of the resource, because whatever status code is returned, the node will obtain the content, cache it and respond to the user. This is also the data source of the dispatch center: it can be considered request-triggered, and the more distinct requests there are, the more data is stored.
Fig. 7 shows the processing flow of the dispatch center. When a request arrives, the request parameters are first checked for validity; if they are invalid, a code indicating illegal request parameters is returned. If the request is valid, the database is queried for the requested resource to confirm whether the resource has been accessed. If it has, one node is selected from the accessed nodes according to the algorithm, packaged into a 302 response and returned to the requester. If no node has accessed the resource, the resource-not-found response code is returned directly. Whether a 302 response or a resource-not-found response is returned, the node making this access is appended to the access list of the resource, so that the next request can obtain the content from an accessed node.
For example, when the dispatch center receives a request for the resource http://abc.com/1.txt from the requesting node nncm01, the access-node list of this resource in the dispatch center is empty because this is the first access, so the dispatch center directly returns the resource-not-found response code and places node nncm01 into the access list of the resource. Then another request for http://abc.com/1.txt arrives, this time from the requesting node nncm02. From the access list of the resource, the dispatch center knows that nncm01 has accessed it before, and since only this one node has accessed it, the node information of nncm01 is assembled directly into the 302 response http://{nncm01 ip}/abc.com/1.txt, which is returned to nncm02. Thus nncm02 obtains the content from nncm01 according to the 302 response instead of going to the middle layer. At the same time, the dispatch center adds nncm02 to the access list of http://abc.com/1.txt, so the access list of the resource now contains the two nodes nncm01 and nncm02. If more nodes access the resource, the dispatch center returns one of nncm01 and nncm02 to the new node while adding the new node to the access list.
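For illustration only, the following Python sketch replays the fig. 7 flow with the abc.com/1.txt example above; the numeric response codes and the choice of the first listed node are simplifying assumptions.

```python
access_list: dict[str, list[str]] = {}

def handle(domain: str, url: str, node_id: str, node_ip: dict):
    """Dispatch-center handler: validate, look up the access list, answer, record."""
    if not domain or not url or not node_id:
        return 400, "illegal request parameters"          # assumed code for invalid parameters
    key = f"{domain}{url}"
    already = list(access_list.get(key, []))               # nodes that accessed it before this request
    if node_id not in access_list.setdefault(key, []):
        access_list[key].append(node_id)                   # requester will cache the content either way
    if not already:
        return 404, "resource not yet on any node"         # assumed resource-not-found code
    chosen = already[0]                                    # in practice: nearest / least-loaded node
    return 302, f"http://{node_ip[chosen]}/{domain}{url}"  # redirect to a node that has the resource

ips = {"nncm01": "10.0.0.1", "nncm02": "10.0.0.2"}
print(handle("abc.com", "/1.txt", "nncm01", ips))  # first access -> (404, ...)
print(handle("abc.com", "/1.txt", "nncm02", ips))  # -> (302, 'http://10.0.0.1/abc.com/1.txt')
```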
Description of the bandwidth benefit: assume a request consumes a bandwidth of 1M. If it goes back to the middle layer, 1M of middle-layer bandwidth is generated and 1M of bandwidth is paid for. With the present invention, this request is transferred to another edge node, i.e. the edge bandwidth increases by 1M while the middle-layer bandwidth decreases by 1M. Suppose the paid edge bandwidth is 100M (100M must be paid for even if the actually used bandwidth is less than 100M). If the added 1M does not push the paid bandwidth beyond 100M, the transfer benefit is 100%. If the billed bandwidth increases by 0.5M, i.e. the paid edge bandwidth becomes 100.5M, the benefit is (1 - 0.5/1) = 50%. In the extreme case where the paid bandwidth increases by the full 1M to 101M, the transfer benefit is 0. In practice the paid edge bandwidth mostly runs below its commitment, so in most cases the bandwidth benefit is positive. Moreover, by adding policy control means, the room for benefit can be further enlarged.
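For illustration only, the bandwidth-benefit arithmetic above can be written as the following Python sketch, which reproduces the 100%, 50% and 0% cases.

```python
def transfer_benefit(transferred_mbps: float, added_paid_edge_mbps: float) -> float:
    """Fraction of the middle-layer bandwidth saving kept after subtracting any
    extra edge-side paid bandwidth caused by the transferred request."""
    return 1 - added_paid_edge_mbps / transferred_mbps

print(transfer_benefit(1, 0.0))   # 1.0 -> 100%: the extra 1M stays under the paid 100M
print(transfer_benefit(1, 0.5))   # 0.5 -> 50%: paid edge bandwidth grows to 100.5M
print(transfer_benefit(1, 1.0))   # 0.0 -> no benefit: paid edge bandwidth grows by the full 1M
```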
Because the dispatch center controls the returned 302 responses, this embodiment can also, when the dispatch center determines that no edge node has requested the target resource, actively return the node information of a low-cost second edge node (for example, a metropolitan-area-network node or another non-mainstream node) to the target edge node. The request is thereby actively directed to a low-cost node: the second edge node obtains the target resource from the source station or the middle layer, and the target edge node obtains the target resource from the second edge node. Combined with bandwidth adjustment strategies, node selection algorithms and other means, the method can flexibly control where requests flow, reducing cost and improving the utilization of non-mainstream nodes.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present invention is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present invention. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present invention.
According to another aspect of the embodiment of the present application, there is further provided a content distribution method applied to an edge node, as shown in fig. 8, including:
S802, sending a target cache request to a dispatching center, wherein the target cache request is used for requesting to cache target resources in a source station or a middle layer into a target edge node;
S804, receiving a target cache instruction returned by the dispatching center, wherein the target cache instruction is used for indicating a target edge node to acquire target resources from a first edge node, and the first edge node is the edge node which has requested the target resources in advance.
Optionally, this embodiment may be applied to a content delivery network (Content Delivery Network, CDN). A CDN is a content delivery network built on top of the existing network. Relying on edge servers (edge nodes) deployed in various locations and on functional modules of the central platform such as load balancing, content delivery and scheduling, it enables a user to obtain the required content nearby, which reduces network congestion and improves the user's access response speed and hit rate.
The global scheduling system of the CDN dispatches a user's access request to a CDN edge node. After the user accesses the CDN edge node, if the requested resource exists on that node, the node returns the content to the user directly; if not, the edge node pulls the content from the source station, returns it to the user, and caches it locally, so that the next access to the resource can be served directly. To reduce the pressure on the source station, multiple cache layers may be provided, achieving multi-level caching and back-to-source convergence. As shown in fig. 2, fig. 2 is a schematic diagram of an alternative CDN. In fig. 2, the user requests the target resource from edge node 1, and edge node 1 has not cached the target resource, so edge node 1 acquires the target resource from middle layer 1. If middle layer 1 does not have the target resource either, the target resource is acquired from the source station and finally delivered to the user.
Optionally, in this embodiment, the target edge node may be any edge node of the CDN edge nodes. After receiving the request of the user for the target resource, the target edge node needs to acquire the target resource if the target edge node does not have the target resource. In this embodiment, the target edge node may send a target cache request to the dispatch center, and the dispatch center may determine which edge nodes have requested the target resource after obtaining the target cache request sent by the target edge node. And then, node information of the first edge node which requests the target resource is issued to the target edge node, and the target edge node acquires the target resource from the first edge node.
Alternatively, in this embodiment, the type of the target resource is not limited. It may be any one of, or a combination of, video, audio, pictures, text, files, and so on.
Alternatively, the edge nodes in this embodiment may be deployed at different locations as needed. An edge node may upload data to and download data from the source station or the middle layer. The user obtains the required data by accessing a nearby edge node. The source station, the middle layer and the edge nodes may each be servers.
As shown in fig. 3, when a user requests a target resource from edge node 1 and the resource is not cached there, edge node 1 sends a target cache request; the scheduling center acquires the target cache request and returns a target cache instruction to edge node 1, and edge node 1 acquires the target resource from edge node 2 and delivers it to the user.
According to the method, when the user obtains the target resource from the edge node and the edge node does not cache the target resource, the edge node can obtain the target resource from other edge nodes which have requested the target resource, and the target resource does not need to be obtained from a source station or a middle layer, so that the effect of reducing the distribution cost of the content distribution network is achieved.
For other examples of this embodiment, please refer to the above examples, and the description thereof is omitted.
According to still another aspect of the embodiment of the present application, there is also provided a content distribution apparatus applied to a dispatch center, as shown in fig. 9, including:
An obtaining unit 902, configured to obtain a target cache request sent by a target edge node, where the target cache request is used to request to cache a target resource in a source station or a middle layer into the target edge node;
The first sending unit 904 is configured to send a target cache instruction to a target edge node, where the target cache instruction is configured to instruct the target edge node to obtain a target resource from a first edge node, and the first edge node is an edge node that requests the target resource in advance.
Optionally, this embodiment may be applied to a content delivery network (Content Delivery Network, CDN). A CDN is a content delivery network built on top of the existing network. Relying on edge servers (edge nodes) deployed in various locations and on functional modules of the central platform such as load balancing, content delivery and scheduling, it enables a user to obtain the required content nearby, which reduces network congestion and improves the user's access response speed and hit rate.
The global scheduling system of the CDN dispatches a user's access request to a CDN edge node. After the user accesses the CDN edge node, if the requested resource exists on that node, the node returns the content to the user directly; if not, the edge node pulls the content from the source station, returns it to the user, and caches it locally, so that the next access to the resource can be served directly. To reduce the pressure on the source station, multiple cache layers may be provided, achieving multi-level caching and back-to-source convergence. As shown in fig. 2, fig. 2 is a schematic diagram of an alternative CDN. In fig. 2, the user requests the target resource from edge node 1, and edge node 1 has not cached the target resource, so edge node 1 acquires the target resource from middle layer 1. If middle layer 1 does not have the target resource either, the target resource is acquired from the source station and finally delivered to the user.
Optionally, in this embodiment, the target edge node may be any edge node of the CDN edge nodes. After receiving the request of the user for the target resource, the target edge node needs to acquire the target resource if the target edge node does not have the target resource. In this embodiment, the target edge node may send a target cache request to the dispatch center, and the dispatch center may determine which edge nodes have requested the target resource after obtaining the target cache request sent by the target edge node. And then, node information of the first edge node which requests the target resource is issued to the target edge node, and the target edge node acquires the target resource from the first edge node.
Alternatively, in this embodiment, the type of the target resource is not limited. It may be any one of, or a combination of, video, audio, pictures, text, files, and so on.
Alternatively, the edge nodes in this embodiment may be deployed at different locations as needed. An edge node may upload data to and download data from the source station or the middle layer. The user obtains the required data by accessing a nearby edge node. The source station, the middle layer and the edge nodes may each be servers.
As shown in fig. 3, when a user requests a target resource from edge node 1 and the resource is not cached there, edge node 1 sends a target cache request; the scheduling center acquires the target cache request and returns a target cache instruction to edge node 1, and edge node 1 acquires the target resource from edge node 2 and delivers it to the user.
According to the method, when the user obtains the target resource from the edge node and the edge node does not cache the target resource, the edge node can obtain the target resource from other edge nodes which have requested the target resource, and the target resource does not need to be obtained from a source station or a middle layer, so that the effect of reducing the distribution cost of the content distribution network is achieved.
For other examples of this embodiment, please refer to the above examples, and the description thereof is omitted.
According to still another aspect of the embodiment of the present application, there is also provided a content distribution apparatus applied to an edge node, as shown in fig. 10, including:
A sending unit 1002, configured to send a target cache request to a dispatch center, where the target cache request is used to request to cache a target resource in a source station or a middle layer into a target edge node;
The first receiving unit 1004 is configured to receive a target cache instruction returned by the scheduling center, where the target cache instruction is configured to instruct a target edge node to obtain a target resource from a first edge node, and the first edge node is an edge node that requests the target resource in advance.
Optionally, this embodiment may be applied to a content delivery network (Content Delivery Network, CDN). A CDN is a content delivery network built on top of the existing network. Relying on edge servers (edge nodes) deployed in various locations and on functional modules of the central platform such as load balancing, content delivery and scheduling, it enables a user to obtain the required content nearby, which reduces network congestion and improves the user's access response speed and hit rate.
The global scheduling system of the CDN dispatches a user's access request to a CDN edge node. After the user accesses the CDN edge node, if the requested resource exists on that node, the node returns the content to the user directly; if not, the edge node pulls the content from the source station, returns it to the user, and caches it locally, so that the next access to the resource can be served directly. To reduce the pressure on the source station, multiple cache layers may be provided, achieving multi-level caching and back-to-source convergence. As shown in fig. 2, fig. 2 is a schematic diagram of an alternative CDN. In fig. 2, the user requests the target resource from edge node 1, and edge node 1 has not cached the target resource, so edge node 1 acquires the target resource from middle layer 1. If middle layer 1 does not have the target resource either, the target resource is acquired from the source station and finally delivered to the user.
Optionally, in this embodiment, the target edge node may be any edge node of the CDN edge nodes. After receiving the request of the user for the target resource, the target edge node needs to acquire the target resource if the target edge node does not have the target resource. In this embodiment, the target edge node may send a target cache request to the dispatch center, and the dispatch center may determine which edge nodes have requested the target resource after obtaining the target cache request sent by the target edge node. And then, node information of the first edge node which requests the target resource is issued to the target edge node, and the target edge node acquires the target resource from the first edge node.
Alternatively, in this embodiment, the type of the target resource is not limited. It may be any one of, or a combination of, video, audio, pictures, text, files, and so on.
Alternatively, the edge nodes in this embodiment may be deployed at different locations as needed. An edge node may upload data to and download data from the source station or the middle layer. The user obtains the required data by accessing a nearby edge node. The source station, the middle layer and the edge nodes may each be servers.
As shown in fig. 3, when a user requests a target resource from edge node 1 and the resource is not cached there, edge node 1 sends a target cache request; the scheduling center acquires the target cache request and returns a target cache instruction to edge node 1, and edge node 1 acquires the target resource from edge node 2 and delivers it to the user.
According to the method, when the user obtains the target resource from the edge node and the edge node does not cache the target resource, the edge node can obtain the target resource from other edge nodes which have requested the target resource, and the target resource does not need to be obtained from a source station or a middle layer, so that the effect of reducing the distribution cost of the content distribution network is achieved.
For other examples of this embodiment, please refer to the above examples, and the description thereof is omitted.
According to a further aspect of embodiments of the present invention, there is also provided an electronic device for implementing the above-described content distribution method, the electronic device may include a memory in which a computer program is stored, and a processor configured to execute the steps in the above-described content distribution method by the computer program.
According to yet another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium having a computer program stored therein, wherein the computer program when executed by a processor performs the steps in the content distribution method described above.
Alternatively, in this embodiment, it will be understood by those skilled in the art that all or part of the steps in the methods of the above embodiments may be performed by a program for instructing a terminal device to execute the steps, where the program may be stored in a computer readable storage medium, and the storage medium may include: flash disk, read-Only Memory (ROM), random-access Memory (Random Access Memory, RAM), magnetic disk or optical disk, etc.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
The integrated units in the above embodiments may be stored in the above-described computer-readable storage medium if implemented in the form of software functional units and sold or used as separate products. Based on such understanding, the technical solution of the present invention may be embodied in essence or a part contributing to the prior art or all or part of the technical solution in the form of a software product stored in a storage medium, comprising several instructions for causing one or more computer devices (which may be personal computers, servers or network devices, etc.) to perform all or part of the steps of the method of the various embodiments of the present invention.
In the foregoing embodiments of the present invention, the descriptions of the embodiments are emphasized, and for a portion of this disclosure that is not described in detail in this embodiment, reference is made to the related descriptions of other embodiments.
In the several embodiments provided by the present application, it should be understood that the disclosed client may be implemented in other manners. The apparatus embodiments described above are merely exemplary; the division into units is merely a logical functional division, and there may be other ways of dividing them in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be implemented through some interfaces, units or modules, and may be in electrical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The foregoing is merely a preferred embodiment of the present invention. It should be noted that several improvements and modifications may be made by those skilled in the art without departing from the principles of the present invention, and such improvements and modifications shall also fall within the scope of protection of the present invention.

Claims (10)

1. A content distribution method applied to a dispatch center, comprising:
Acquiring a target cache request sent by a target edge node, wherein the target cache request is used for requesting to cache target resources in a source station or a middle layer into the target edge node;
sending a target cache instruction to the target edge node, wherein the target cache instruction is used for indicating the target edge node to acquire the target resource from a first edge node, and the first edge node is an edge node which has previously requested the target resource;
Before the target cache request sent by the target edge node is acquired, the method further comprises the following steps: under the condition that a cache request for caching any resource in the source station or the middle layer is acquired, node information of an edge node requesting the resource is recorded into an access record of the resource;
After obtaining the target cache request sent by the target edge node, the method further includes: sending a second cache instruction to a second edge node and sending a third cache instruction to the target edge node under the condition that node information of any edge node is not included in the access record of the target resource, wherein the second edge node is a node with low bandwidth cost, the second cache instruction is used for instructing the second edge node to acquire the target resource from the source station or the middle layer, the third cache instruction is used for instructing the target edge node to acquire the target resource from the second edge node, and the cost of bandwidth consumed by the target edge node to acquire the target resource from the second edge node is smaller than a first threshold; and recording the node information of the second edge node and the node information of the target edge node into the access record of the target resource.
2. The method according to claim 1, wherein sending the target cache instruction to the target edge node comprises:
selecting, in a case where the access record of the target resource indicates that a plurality of edge nodes have accessed the target resource, one edge node from the plurality of edge nodes as the first edge node.
3. The method according to claim 2, wherein selecting one edge node from the plurality of edge nodes as the first edge node comprises:
taking, from the plurality of edge nodes, the edge node closest to the target edge node as the first edge node.
4. The method according to claim 2, wherein selecting one edge node from the plurality of edge nodes as the first edge node comprises:
taking the edge node with the smallest load among the plurality of edge nodes as the first edge node.
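
Claims 3 and 4 name two alternative ways of picking the first edge node. The two selectors below sketch them under the assumption that inter-node distance and per-node load are available as simple lookups; both helper names and the lookup shapes are hypothetical.

    def select_nearest(candidates, target_node_id, distance):
        # Claim 3: the edge node closest to the target edge node.
        # `distance` is an assumed (node_id, node_id) -> distance lookup.
        return min(candidates, key=lambda n: distance[(n, target_node_id)])

    def select_least_loaded(candidates, load):
        # Claim 4: the edge node with the smallest current load.
        # `load` is an assumed node_id -> current-load lookup.
        return min(candidates, key=lambda n: load[n])

Either selector could replace the simple prior_nodes[0] choice in the scheduling-center sketch above.
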
5. The method according to any one of claims 1 to 4, wherein acquiring the target cache request sent by the target edge node comprises:
acquiring, from the target cache request, the domain name and the URL address of the target resource and the node information of the target edge node.
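
Claim 5 only enumerates the fields carried by the target cache request; one possible parse step is sketched below, assuming a JSON request body whose field names are illustrative.

    import json

    def parse_cache_request(raw: bytes) -> dict:
        # Extract the fields named in claim 5 from an assumed JSON request body.
        body = json.loads(raw)
        return {
            "domain": body["domain"],        # domain name of the target resource
            "url": body["url"],              # URL address of the target resource
            "node_info": body["node_info"],  # node information of the target edge node
        }
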
6. A content distribution method applied to an edge node, comprising:
sending a target cache request to a scheduling center, wherein the target cache request is used for requesting that a target resource in a source station or a middle layer be cached into a target edge node;
receiving a target cache instruction returned by the scheduling center, wherein the target cache instruction is used for instructing the target edge node to acquire the target resource from a first edge node, and the first edge node is an edge node that has previously requested the target resource;
wherein the scheduling center records, in a case where a cache request for caching any resource in the source station or the middle layer is acquired, node information of the edge node requesting the resource into an access record of the resource; and after the target cache request is sent to the scheduling center, the method further comprises: receiving a third cache instruction sent by the scheduling center in a case where the access record of the target resource does not include node information of any edge node, wherein the third cache instruction is used for instructing the target edge node to acquire the target resource from a second edge node, the second edge node is a node with a low bandwidth cost, the cost of the bandwidth consumed by the target edge node in acquiring the target resource from the second edge node is less than a first threshold, and the scheduling center is further configured to send a second cache instruction to the second edge node, the second cache instruction being used for instructing the second edge node to acquire the target resource from the source station or the middle layer; and acquiring, in response to the third cache instruction, the target resource from the second edge node.
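
On the edge-node side (claim 6) the flow is: send the cache request, receive a cache instruction, fetch from wherever the instruction points. The sketch below mirrors the scheduling-center sketch above; the transport and the method names (request_cache, on_cache_instruction, read) are assumed, not prescribed by the claims.

    class EdgeNode:
        """Illustrative sketch of the edge-node flow in claim 6; all names are assumed."""

        def __init__(self, node_id, scheduler, peers, origin):
            self.node_id = node_id
            self.scheduler = scheduler   # scheduling-center handle
            self.peers = peers           # node_id -> peer edge-node handle
            self.origin = origin         # source station / middle layer handle
            self.cache = {}

        def request_cache(self, resource_key):
            # Send the target cache request to the scheduling center.
            self.scheduler.handle_cache_request(self.node_id, resource_key)

        def on_cache_instruction(self, instruction):
            # Act on the target, second or third cache instruction.
            resource_key = instruction["resource"]
            if instruction["fetch_from"] == "node":
                # Target / third cache instruction: fetch from the named edge node.
                data = self.peers[instruction["node_id"]].read(resource_key)
            else:
                # Second cache instruction: fetch from the source station / middle layer.
                data = self.origin.read(resource_key)
            self.cache[resource_key] = data

        def read(self, resource_key):
            # Peers fetch already-cached content through this call in the sketch.
            return self.cache[resource_key]
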
7. A content distribution apparatus applied to a scheduling center, comprising:
an acquisition unit, configured to acquire a target cache request sent by a target edge node, wherein the target cache request is used for requesting that a target resource in a source station or a middle layer be cached into the target edge node;
a first sending unit, configured to send a target cache instruction to the target edge node, wherein the target cache instruction is used for instructing the target edge node to acquire the target resource from a first edge node, and the first edge node is an edge node that has previously requested the target resource;
wherein the apparatus further comprises: a first recording unit, configured to, before the target cache request sent by the target edge node is acquired, record node information of the edge node requesting a resource into an access record of the resource in a case where a cache request for caching any resource in the source station or the middle layer is acquired;
wherein the apparatus further comprises: a third sending unit, configured to, after the target cache request sent by the target edge node is acquired, send a second cache instruction to a second edge node and send a third cache instruction to the target edge node in a case where the access record of the target resource does not include node information of any edge node, wherein the second edge node is a node with a low bandwidth cost, the second cache instruction is used for instructing the second edge node to acquire the target resource from the source station or the middle layer, the third cache instruction is used for instructing the target edge node to acquire the target resource from the second edge node, and the cost of the bandwidth consumed by the target edge node in acquiring the target resource from the second edge node is less than a first threshold; and a third recording unit, configured to record the node information of the second edge node and the node information of the target edge node into the access record of the target resource.
8. A content distribution apparatus applied to an edge node, comprising:
a sending unit, configured to send a target cache request to a scheduling center, wherein the target cache request is used for requesting that a target resource in a source station or a middle layer be cached into a target edge node;
a first receiving unit, configured to receive a target cache instruction returned by the scheduling center, wherein the target cache instruction is used for instructing the target edge node to acquire the target resource from a first edge node, and the first edge node is an edge node that has previously requested the target resource;
wherein the scheduling center is further configured to record node information of the edge node requesting a resource into an access record of the resource in a case where a cache request for caching any resource in the source station or the middle layer is acquired, and the apparatus further comprises: a third receiving unit, configured to, after the target cache request is sent to the scheduling center, receive a third cache instruction sent by the scheduling center in a case where the access record of the target resource does not include node information of any edge node, wherein the third cache instruction is used for instructing the target edge node to acquire the target resource from a second edge node, the second edge node is a node with a low bandwidth cost, the cost of the bandwidth consumed by the target edge node in acquiring the target resource from the second edge node is less than a first threshold, and the scheduling center is further configured to send a second cache instruction to the second edge node, the second cache instruction being used for instructing the second edge node to acquire the target resource from the source station or the middle layer; and a second caching unit, configured to acquire, in response to the third cache instruction, the target resource from the second edge node.
9. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, performs the method according to any one of claims 1 to 5 or claim 6.
10. An electronic device comprising a memory and a processor, characterized in that the memory has a computer program stored therein, and the processor is configured to execute the method according to any one of claims 1 to 5 or claim 6 by means of the computer program.
CN202111152471.0A 2021-09-29 2021-09-29 Content distribution method, content distribution device, storage medium and electronic equipment Active CN113873302B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111152471.0A CN113873302B (en) 2021-09-29 2021-09-29 Content distribution method, content distribution device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111152471.0A CN113873302B (en) 2021-09-29 2021-09-29 Content distribution method, content distribution device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN113873302A CN113873302A (en) 2021-12-31
CN113873302B true CN113873302B (en) 2024-04-26

Family

ID=79000535

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111152471.0A Active CN113873302B (en) 2021-09-29 2021-09-29 Content distribution method, content distribution device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN113873302B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114466018A (en) * 2022-03-22 2022-05-10 北京有竹居网络技术有限公司 Scheduling method and device for content distribution network, storage medium and electronic equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104717231A (en) * 2014-12-18 2015-06-17 北京蓝汛通信技术有限责任公司 Pre-distribution processing method and device of content distribution network
CN111263171A (en) * 2020-02-25 2020-06-09 北京达佳互联信息技术有限公司 Live streaming media data acquisition method and edge node area networking system
CN111770119A (en) * 2020-09-03 2020-10-13 云盾智慧安全科技有限公司 Website resource acquisition method, system, device and computer storage medium
CN112153160A (en) * 2020-09-30 2020-12-29 北京金山云网络技术有限公司 Access request processing method and device and electronic equipment
CN112688980A (en) * 2019-10-18 2021-04-20 上海哔哩哔哩科技有限公司 Resource distribution method and device, and computer equipment
WO2021135835A1 (en) * 2019-12-31 2021-07-08 北京金山云网络技术有限公司 Resource acquisition method and apparatus, and node device in cdn network


Also Published As

Publication number Publication date
CN113873302A (en) 2021-12-31

Similar Documents

Publication Publication Date Title
US10218806B2 (en) Handling long-tail content in a content delivery network (CDN)
US11032387B2 (en) Handling of content in a content delivery network
US10778801B2 (en) Content delivery network architecture with edge proxy
US8930538B2 (en) Handling long-tail content in a content delivery network (CDN)
US10601767B2 (en) DNS query processing based on application information
US8458290B2 (en) Multicast mapped look-up on content delivery networks
US10264090B2 (en) Geographical data storage assignment based on ontological relevancy
US20080208961A1 (en) Parallel retrieval system
CN107835437B (en) Dispatching method based on more cache servers and device
AU2011203246B2 (en) Content processing between locations workflow in content delivery networks
CN110830565B (en) Resource downloading method, device, system, electronic equipment and storage medium
CN109873855A (en) A kind of resource acquiring method and system based on block chain network
CN113873302B (en) Content distribution method, content distribution device, storage medium and electronic equipment
CN113630457A (en) Task scheduling method and device, computer equipment and storage medium
US10924573B2 (en) Handling long-tail content in a content delivery network (CDN)
KR20050060783A (en) Method for retrieving and downloading digital media files through network and medium on which the program for executing the method is recorded
CN115277851A (en) Service request processing method and system

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant