CN110365545B - Distribution rate processing method, device, server and storage medium - Google Patents

Distribution rate processing method, device, server and storage medium

Info

Publication number
CN110365545B
CN110365545B (application CN201910734906.9A)
Authority
CN
China
Prior art keywords
traffic
rate
resource
server
time period
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910734906.9A
Other languages
Chinese (zh)
Other versions
CN110365545A (en)
Inventor
秦汉中
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN201910734906.9A priority Critical patent/CN110365545B/en
Publication of CN110365545A publication Critical patent/CN110365545A/en
Application granted granted Critical
Publication of CN110365545B publication Critical patent/CN110365545B/en
Legal status: Active

Classifications

    • H04L 41/0896 — Bandwidth or capacity management, i.e. automatically increasing or decreasing capacities (under H04L 41/08, Configuration management of networks or network elements)
    • H04L 47/10 — Flow control; Congestion control (under H04L 47/00, Traffic control in data switching networks)
    • H04L 47/20 — Traffic policing
    (H — Electricity; H04 — Electric communication technique; H04L — Transmission of digital information, e.g. telegraphic communication)

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The disclosure relates to a delivery rate processing method and device, a server, and a storage medium, and belongs to the field of internet technologies. The method comprises the following steps: acquiring a first delivery rate requested from a traffic scheduling server in a first time period; acquiring the first traffic generated in the first time period; acquiring the target traffic for a second time period; and determining a second delivery rate to request from the traffic scheduling server in the second time period according to the ratio of the first delivery rate to the first traffic, together with the target traffic. The method thus determines the delivery rate to request for the next time period from the rate requested in the previous period, the traffic actually generated in that period, and the traffic targeted for the next period. It anticipates the traffic of the next time period, controls that traffic, and avoids traffic overflow without provisioning excess bandwidth resources, thereby improving bandwidth utilization.

Description

Distribution rate processing method, device, server and storage medium
Technical Field
The present disclosure relates to the field of internet technologies, and in particular, to a method and an apparatus for processing an issue rate, a server, and a storage medium.
Background
With the gradual expansion of the internet and the increasing abundance of internet resources, a large number of terminals access resources provided by resource servers, and traffic is generated on a resource server during such access. To ensure stable operation and traffic balance, a system usually deploys multiple resource servers; a scheduling server selects a target resource server from among them through global load-balancing scheduling and assigns it to a terminal for access.
If the traffic generated on a resource server exceeds the upper limit of its bandwidth resources, the traffic overflows and normal access to the resource server is affected. To avoid this, a resource server typically has to provision ample bandwidth resources. However, if the traffic actually generated does not approach that upper limit, bandwidth is wasted and its utilization is low.
Disclosure of Invention
The present disclosure provides a method and an apparatus for processing an issue rate, a server, and a storage medium, which can overcome the problem of low utilization rate of bandwidth resources in the related art.
According to a first aspect of the embodiments of the present disclosure, a method for processing an issue rate is provided, which is applied to a target resource server in a scheduling system, where the scheduling system includes a traffic scheduling server and a plurality of resource servers, and the traffic scheduling server is connected to the plurality of resource servers, and the method includes:
acquiring a first issuing rate requested to the traffic scheduling server in a first time period, wherein the issuing rate is the proportion of a resource link of the target resource server in resource links issued to at least one terminal by the traffic scheduling server;
acquiring first flow in the first time period, wherein the first flow is generated when the at least one terminal accesses a resource link of the target resource server;
acquiring target flow in a second time period, wherein the second time period is the next time period of the first time period;
and determining a second delivery rate requested to the traffic scheduling server within the second time period according to the ratio of the first delivery rate to the first traffic and the target traffic.
In a possible implementation manner, before obtaining the first delivery rate requested to the traffic scheduling server within the first time period, the method further includes:
and sending the first sending rate to the traffic scheduling server, wherein the traffic scheduling server is used for sending the resource link of the target resource server according to the first sending rate.
In another possible implementation manner, the determining, according to the ratio between the first delivery rate and the first traffic and the target traffic, a second delivery rate requested to the traffic scheduling server within the second time period includes:
determining a second delivery rate requested to the traffic scheduling server within the second time period by adopting the following formula according to the ratio between the first delivery rate and the first traffic and the target traffic:
r2 = (r1 / b1) × b2 (the formula appears as an image in the original; reconstructed here from the surrounding description)
where r2 is the second delivery rate, r1 is the first delivery rate, b2 is the target traffic, and b1 is the first traffic.
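The first implementation's formula is simple enough to sketch directly. The snippet below is an illustrative sketch only (the function name and example figures are invented for illustration): the ratio r1/b1 observed in the previous period is scaled by the target traffic b2.

```python
def next_delivery_rate(r1: float, b1: float, b2: float) -> float:
    """Delivery rate to request for the next time period.

    Implements the first formula as described in the text: the
    rate-per-unit-of-traffic ratio r1/b1 from the previous period,
    scaled by the target traffic b2 for the next period.
    """
    if b1 <= 0:
        raise ValueError("first-period traffic must be positive")
    return (r1 / b1) * b2

# If 20% of issued links pointed at this server (r1 = 0.20) and that
# produced 80 Mbps of traffic (b1 = 80), while the next period targets
# 90 Mbps (b2 = 90), the requested rate becomes 0.225.
r2 = next_delivery_rate(0.20, 80, 90)
```

The same per-unit-of-traffic scaling would apply regardless of the unit in which traffic is measured, as long as b1 and b2 use the same unit.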
In another possible implementation manner, the determining, according to the ratio between the first delivery rate and the first traffic and the target traffic, a second delivery rate requested to the traffic scheduling server within the second time period includes:
determining a second delivery rate requested to the traffic scheduling server within the second time period by adopting the following formula according to the ratio between the first delivery rate and the first traffic and the target traffic:
(The formula appears as an image in the original and is not reproduced here; it defines the second delivery rate in terms of the quantities below.)
where r1 is the first delivery rate, b1 is the first traffic, r2 is the second delivery rate, b2 is the target traffic, Δ(·) is a preset function, and t is a preset duration: the time interval between the moment the traffic scheduling server issues a resource link and the moment that link is accessed.
In another possible implementation manner, the determining, according to the ratio between the first delivery rate and the first traffic and the target traffic, a second delivery rate requested to the traffic scheduling server within the second time period includes:
when the first flow exceeds a preset flow upper limit, determining a second delivery rate requested to the flow scheduling server within the second time period by adopting the following formula according to the proportion between the first delivery rate and the first flow and the target flow:
a1 = r1 × 10%;
(the remaining two formulas appear as images in the original and are not reproduced here)
where r1 is the first delivery rate, r2 is the second delivery rate, b1 is the first traffic, b2 is the target traffic, a1 is a delivery rate obtained by suppressing the first delivery rate, a2 is another delivery rate obtained by suppressing the first delivery rate, avg(a1 + a2) is the average of a1 and a2, p is a preset suppression factor, and t is a preset duration: the time interval between the moment the traffic scheduling server issues a resource link and the moment that link is accessed.
In another possible implementation manner, after determining, according to the ratio between the first delivery rate and the first traffic and the target traffic, a second delivery rate requested to the traffic scheduling server within the second time period, the method further includes:
and sending the second sending rate to the traffic scheduling server, wherein the traffic scheduling server is used for sending the resource link of the target resource server according to the second sending rate.
According to a second aspect of the embodiments of the present disclosure, there is provided an issue rate processing apparatus, applied to a target resource server in a scheduling system, where the scheduling system includes a traffic scheduling server and a plurality of resource servers, and the traffic scheduling server is connected to the plurality of resource servers, the apparatus includes:
the system comprises an issuing rate obtaining unit, a sending rate obtaining unit and a sending unit, wherein the issuing rate obtaining unit is configured to obtain a first issuing rate requested by the traffic scheduling server in a first time period, and the issuing rate is the proportion of resource links of a target resource server in resource links issued by the traffic scheduling server to at least one terminal;
a first traffic obtaining unit configured to obtain first traffic in the first time period, where the first traffic is generated when the at least one terminal accesses a resource link of the target resource server;
a second flow rate obtaining unit configured to obtain a target flow rate in a second time period, which is a next time period of the first time period;
and the delivery rate determining unit is configured to determine a second delivery rate requested to the traffic scheduling server within the second time period according to the ratio between the first delivery rate and the first traffic and the target traffic.
In one possible implementation, the apparatus further includes:
a first sending unit, configured to send the first sending rate to the traffic scheduling server, where the traffic scheduling server is configured to send the resource link of the target resource server according to the first sending rate.
In another possible implementation manner, the sending rate determining unit includes:
a first determining subunit, configured to determine, according to the ratio between the first delivery rate and the first traffic and the target traffic, a second delivery rate requested to the traffic scheduling server within the second time period by using the following formula:
r2 = (r1 / b1) × b2 (the formula appears as an image in the original; reconstructed here from the surrounding description)
where r2 is the second delivery rate, r1 is the first delivery rate, b2 is the target traffic, and b1 is the first traffic.
In another possible implementation manner, the sending rate determining unit includes:
a second determining subunit, configured to determine, according to the ratio between the first delivery rate and the first traffic and the target traffic, a second delivery rate requested to the traffic scheduling server within the second time period by using the following formula:
(The formula appears as an image in the original and is not reproduced here; it defines the second delivery rate in terms of the quantities below.)
where r1 is the first delivery rate, b1 is the first traffic, r2 is the second delivery rate, b2 is the target traffic, Δ(·) is a preset function, and t is a preset duration: the time interval between the moment the traffic scheduling server issues a resource link and the moment that link is accessed.
In another possible implementation manner, the sending rate determining unit includes:
a third determining subunit, configured to, when the first traffic exceeds a preset upper traffic limit, determine, according to a ratio between the first delivery rate and the first traffic and the target traffic, a second delivery rate requested to the traffic scheduling server within the second time period by using the following formula:
a1 = r1 × 10%;
(the remaining two formulas appear as images in the original and are not reproduced here)
where r1 is the first delivery rate, r2 is the second delivery rate, b1 is the first traffic, b2 is the target traffic, a1 is a delivery rate obtained by suppressing the first delivery rate, a2 is another delivery rate obtained by suppressing the first delivery rate, avg(a1 + a2) is the average of a1 and a2, p is a preset suppression factor, and t is a preset duration: the time interval between the moment the traffic scheduling server issues a resource link and the moment that link is accessed.
In another possible implementation manner, the apparatus further includes:
and the second sending unit is configured to send the second delivery rate to the traffic scheduling server, and the traffic scheduling server is configured to deliver the resource link of the target resource server according to the second delivery rate.
According to a third aspect of the embodiments of the present disclosure, there is provided a resource server, including:
one or more processors;
volatile or non-volatile memory for storing instructions executable by the one or more processors;
wherein the one or more processors are configured to perform the delivery rate processing method of the first aspect.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium, wherein instructions of the storage medium, when executed by a processor of a resource server, enable the resource server to execute the delivery rate processing method of the first aspect.
According to a fifth aspect of the embodiments of the present disclosure, there is provided a computer program product; when instructions in the computer program product are executed by a processor of a resource server, the resource server is enabled to execute the delivery rate processing method of the first aspect.
The method, device, server, and storage medium provided by the embodiments of the disclosure obtain a first delivery rate requested from the traffic scheduling server in a first time period, obtain the first traffic generated when at least one terminal accesses a resource link of the target resource server in that period, obtain the target traffic for a second time period, and determine a second delivery rate to request from the traffic scheduling server in the second time period according to the ratio between the first delivery rate and the first traffic, together with the target traffic. The delivery rate to request for the next time period is thus determined from the rate requested in the previous period, the traffic actually generated in that period, and the traffic targeted for the next period; the traffic of the next time period can be anticipated and controlled, and traffic overflow is avoided. Excess bandwidth resources no longer need to be provisioned, so waste is avoided and bandwidth utilization improves.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a schematic illustration of an implementation environment provided in accordance with an example embodiment.
Fig. 2 is a flowchart illustrating a delivery rate processing method according to an example embodiment.
Fig. 3 is a flowchart illustrating another delivery rate processing method according to an example embodiment.
Fig. 4 is a traffic curve diagram according to the related art.
Fig. 5 is a delivery rate curve diagram according to an example embodiment.
Fig. 6 is a traffic curve diagram according to an example embodiment.
Fig. 7 is a flowchart illustrating a traffic scheduling method according to an example embodiment.
Fig. 8 is a block diagram illustrating a delivery rate processing apparatus according to an example embodiment.
Fig. 9 is a block diagram illustrating another delivery rate processing apparatus according to an example embodiment.
Fig. 10 is a block diagram illustrating a terminal according to an example embodiment.
Fig. 11 is a schematic diagram illustrating a configuration of a server according to an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below do not represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
FIG. 1 is a schematic diagram of an implementation environment provided in accordance with an example embodiment, as shown in FIG. 1, including: the system comprises at least one terminal 101, a traffic scheduling server 102 and a plurality of resource servers 103, wherein the at least one terminal 101 is connected with the traffic scheduling server 102, the traffic scheduling server 102 is connected with the plurality of resource servers 103, and the at least one terminal 101 is connected with the plurality of resource servers 103.
The terminal 101 may be various types of terminals such as a portable terminal, a pocket terminal, a handheld terminal, and the like, such as a mobile phone, a computer, a tablet computer, and the like. The traffic scheduling server 102 may be a server, a server cluster composed of several servers, or a cloud computing service center. The resource server 103 may be a server, a server cluster composed of several servers, or a cloud computing service center.
Each of the resource servers 103 stores one or more resources and sets, for each resource, a resource link pointing to it for any terminal 101 to access. Any terminal 101 can access a resource on a resource server 103 through the resource link that server provides; the access consumes the bandwidth resources of that resource server 103 and generates traffic on it.
The resources stored by the resource servers 103 may be the same or different, and the configured bandwidth resources may be the same or different.
The traffic scheduling server 102 can issue the resource links of the resource servers 103 to at least one terminal 101. The more links issued for a resource server 103, the more traffic may be generated on it, so by controlling how links are issued the traffic scheduling server 102 schedules traffic across the resource servers 103.
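One plausible way to realize "more links issued, more traffic" is for the scheduler to pick, for each link it issues, a resource server with probability proportional to that server's requested delivery rate. The patent does not specify the scheduler's internal algorithm, so the sketch below is hypothetical; the server names and rates are illustrative.

```python
import random

def pick_resource_server(delivery_rates: dict[str, float]) -> str:
    """Choose which resource server's link to issue next.

    Hypothetical sketch: selection probability is proportional to
    each server's requested delivery rate, so a server with a higher
    rate receives a larger share of the issued links (and hence of
    the resulting traffic).
    """
    servers = list(delivery_rates)
    weights = [delivery_rates[s] for s in servers]
    return random.choices(servers, weights=weights, k=1)[0]

# Illustrative rates: server-a should receive roughly half the links.
rates = {"server-a": 0.5, "server-b": 0.3, "server-c": 0.2}
link_target = pick_resource_server(rates)
```

Because the choice is probabilistic, individual picks vary, but over many issued links the traffic shares converge toward the requested rates.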
In one possible implementation, at least one terminal 101 installs a target application client, and by associating the target application client with the traffic scheduling server 102 and the resource servers 103, the traffic scheduling server 102 issues a resource link for the target application client, and the resource servers 103 provide resources for the target application client.
The target application client may be any application client with a resource access function, such as a short video application client, an information presentation client, an instant messaging application client, and the like.
Fig. 2 is a flowchart illustrating a delivery rate processing method according to an exemplary embodiment. As shown in Fig. 2, the method is executed by any resource server shown in Fig. 1 and comprises:
201. Acquire a first delivery rate requested from the traffic scheduling server in a first time period, where the delivery rate is the proportion of resource links of the target resource server among the resource links issued by the traffic scheduling server to at least one terminal.
202. Acquire the first traffic in the first time period, where the first traffic is generated when the at least one terminal accesses a resource link of the target resource server.
203. Acquire the target traffic for a second time period, where the second time period is the time period following the first time period.
204. Determine a second delivery rate to request from the traffic scheduling server in the second time period according to the ratio of the first delivery rate to the first traffic, together with the target traffic.
The method provided by the embodiments of the disclosure obtains a first delivery rate requested from the traffic scheduling server in a first time period, obtains the first traffic generated when at least one terminal accesses a resource link of the target resource server in that period, obtains the target traffic for a second time period, and determines a second delivery rate to request from the traffic scheduling server in the second time period according to the ratio between the first delivery rate and the first traffic, together with the target traffic. The delivery rate to request for the next time period is thus determined from the rate requested in the previous period, the traffic actually generated in that period, and the traffic targeted for the next period; the traffic of the next time period can be anticipated and controlled, and traffic overflow is avoided. Excess bandwidth resources no longer need to be provisioned, so waste is avoided and bandwidth utilization improves.
In one possible implementation manner, before obtaining a first delivery rate requested to the traffic scheduling server within a first time period, the method further includes:
and sending the first sending rate to a traffic scheduling server, wherein the traffic scheduling server is used for sending the resource link of the target resource server according to the first sending rate.
In another possible implementation manner, determining a second delivery rate requested by the traffic scheduling server within a second time period according to a ratio between the first delivery rate and the first traffic and the target traffic, includes:
according to the proportion between the first issuing rate and the first flow and the target flow, determining a second issuing rate requested to the flow scheduling server in a second time period by adopting the following formula:
r2 = (r1 / b1) × b2 (the formula appears as an image in the original; reconstructed here from the surrounding description)
where r2 is the second delivery rate, r1 is the first delivery rate, b2 is the target traffic, and b1 is the first traffic.
In another possible implementation manner, determining a second delivery rate requested by the traffic scheduling server within a second time period according to a ratio between the first delivery rate and the first traffic and the target traffic, includes:
according to the proportion between the first issuing rate and the first flow and the target flow, determining a second issuing rate requested to the flow scheduling server in a second time period by adopting the following formula:
(The formula appears as an image in the original and is not reproduced here; it defines the second delivery rate in terms of the quantities below.)
where r1 is the first delivery rate, b1 is the first traffic, r2 is the second delivery rate, b2 is the target traffic, Δ(·) is a preset function, and t is a preset duration: the time interval between the moment the traffic scheduling server issues a resource link and the moment that link is accessed.
In another possible implementation manner, determining a second delivery rate requested by the traffic scheduling server within a second time period according to a ratio between the first delivery rate and the first traffic and the target traffic, includes:
when the first flow exceeds the preset flow upper limit, determining a second issuing rate requested to the flow scheduling server in a second time period by adopting the following formula according to the proportion between the first issuing rate and the first flow and the target flow:
a1 = r1 × 10%;
(the remaining two formulas appear as images in the original and are not reproduced here)
where r1 is the first delivery rate, r2 is the second delivery rate, b1 is the first traffic, b2 is the target traffic, a1 is a delivery rate obtained by suppressing the first delivery rate, a2 is another delivery rate obtained by suppressing the first delivery rate, avg(a1 + a2) is the average of a1 and a2, p is a preset suppression factor, and t is a preset duration: the time interval between the moment the traffic scheduling server issues a resource link and the moment that link is accessed.
In another possible implementation manner, after determining a second delivery rate requested to the traffic scheduling server within a second time period according to a ratio between the first delivery rate and the first traffic and the target traffic, the method further includes:
and sending the second sending rate to a traffic scheduling server, wherein the traffic scheduling server is used for sending the resource link of the target resource server according to the second sending rate.
Fig. 3 is a flowchart illustrating another delivery rate processing method according to an exemplary embodiment. As shown in Fig. 3, the interacting entities of the disclosed embodiment are at least one terminal, the traffic scheduling server, and a plurality of resource servers, and the method comprises:
301. the target resource server obtains a first sending rate requested by the traffic scheduling server in a first time period.
The embodiment of the disclosure is applied to a scheduling system, and the scheduling system comprises a flow scheduling server and a plurality of resource servers. The target resource server may be any resource server in the scheduling system.
The sending rate is the proportion of the resource link of the target resource server in the resource links sent by the traffic scheduling server to the at least one terminal. For each time segment, the target resource server can determine the sending rate in the time segment, send the sending rate to the traffic scheduling server, and request the traffic scheduling server to send the resource link of the target resource server according to the sending rate in the time segment.
The target resource server sets at least two time periods according to a fixed period, and the time span of each time period is equal to the fixed period. The target resource server can determine the sending rate of the next time period according to the traffic generation condition of the previous time period each time, thereby realizing the periodic setting of the sending rate. The fixed period may be set according to a requirement of the target resource server for updating the delivery rate, and may be, for example, 1 second, 1 minute, or 1 hour.
Optionally, at the starting time of each time period, the target resource server determines the delivery rate in the time period, and sends the delivery rate to the traffic scheduling server. And at the starting time of the next time period, the target resource server determines the issuing rate in the next time period again and sends the issuing rate to the traffic scheduling server.
Each time the target resource server determines a delivery rate, it stores the rate in a database; when the delivery rate of the previous time period is needed in the next time period, it is retrieved directly from the target resource server's database.
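A minimal in-memory stand-in for this store-and-retrieve behaviour might look as follows. The patent only says "database"; the class and method names here are illustrative, keyed by a period index so the previous period's rate can be looked up at the start of each new period.

```python
class DeliveryRateStore:
    """Minimal in-memory stand-in for the database mentioned in the text."""

    def __init__(self) -> None:
        self._rates: dict[int, float] = {}  # period index -> delivery rate

    def save(self, period: int, rate: float) -> None:
        # Store the rate determined for a given time period.
        self._rates[period] = rate

    def previous(self, period: int) -> float:
        # Retrieve the rate of the time period immediately before `period`.
        return self._rates[period - 1]

store = DeliveryRateStore()
store.save(0, 0.20)         # rate determined for the first time period
r_prev = store.previous(1)  # retrieved at the start of the next period
```

A real deployment would persist the rates (so they survive a restart) and likely evict entries for long-past periods.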
In the embodiment of the present disclosure, a first time period and a second time period are taken as an example, and a process of determining an issue rate in the second time period according to the first time period is described, where the first time period may be any time period, the second time period is a next time period of the first time period, and processes of determining issue rates in other time periods are similar to the embodiment of the present disclosure, and are not described in detail herein.
The sending rate in the first time period determined by the target resource server is called a first sending rate, and the sending rate in the second time period determined by the target resource server is called a second sending rate. The first delivery rate may be obtained by the target resource server according to the delivery rate in the previous time period of the first time period, and the obtaining manner is similar to the process of obtaining the second delivery rate according to the first delivery rate in the subsequent step, which is not described here for the moment. Or, the target resource server provides service for the first time in a first time period, where the first time period is an initial time period of the target resource server, and the first delivery rate is determined as a preset delivery rate, where the preset delivery rate may be a delivery rate randomly determined by the target resource server, or a delivery rate determined according to a lower flow limit of the target resource server, and the like.
302. The target resource server obtains a first flow in a first time period.
The first traffic is generated by at least one terminal accessing a resource link of a target resource server for a first time period.
Optionally, during the first time period, each time the target resource server receives an access request from a terminal, it obtains the resource corresponding to the resource link carried by the request and sends it to the terminal, counting and recording the traffic generated while sending the resource. The total traffic over the first time period can then be counted as the first traffic, which the target resource server stores for later use.
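The per-period accounting described above can be sketched as follows; this is a minimal illustration rather than the patented implementation, and the class and method names are hypothetical.

```python
class TrafficCounter:
    """Per-period traffic accounting: each resource response adds its
    byte count, and the period total is stored for later rate calculations."""

    def __init__(self):
        self.current = 0    # traffic accumulated in the open period
        self.history = []   # totals of closed periods, oldest first

    def record_response(self, num_bytes: int) -> None:
        # Called once per resource sent to a terminal.
        self.current += num_bytes

    def close_period(self) -> int:
        # End the current time period, archive its total, and reset.
        total = self.current
        self.history.append(total)
        self.current = 0
        return total
```

A server would call `record_response` on every resource delivery and `close_period` at each period boundary, then read `history[-1]` as the "first traffic" of the period just ended.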
303. And the target resource server acquires the target flow in the second time period.
The target traffic is the traffic the target resource server intends to generate in the second time period; it may be set by maintenance personnel of the target resource server or calculated by the server itself.
Optionally, the target resource server is configured with a bandwidth resource, and the traffic this bandwidth resource can support defines the traffic upper limit of the target resource server. In actual operation, however, the achieved traffic should remain below this upper limit in order to keep the service stable. Therefore, the traffic the target resource server can achieve while keeping the service stable may be determined as the target traffic.
For example, if the bandwidth resource of the target resource server is 100 Mbps and the server can provide stable service as long as the traffic does not exceed 90 Mbps, then the target traffic of the target resource server is 90 Mbps.
It should be noted that, in each time period, the target traffic set by the target resource server may be the same or different. For example, the target resource server may set the same target traffic for most of the time period, and when an abnormal condition occurs in a certain time period, the target traffic may be adjusted according to the requirement.
304. And the target resource server determines a second issuing rate requested by the traffic scheduling server in a second time period according to the ratio of the first issuing rate to the first traffic and the target traffic.
In the embodiment of the disclosure, in the same time period, after the target resource server requests the traffic scheduling server for the delivery rate, the traffic scheduling server delivers the resource link according to the delivery rate, so as to generate traffic on the target resource server. Therefore, there is an association relationship between the issued rate and the traffic.
However, since the target resource server cannot know the total number of the resource links issued by the traffic scheduling server in each time period in advance, and cannot predict when the terminal accesses the resource links after acquiring the resource links, it is not possible to accurately determine what kind of association relationship exists between the issuing rate and the traffic.
Considering that the association relationship between the delivery rate and the traffic in different time periods is the same, the association relationship between the delivery rate and the traffic in the first time period may be adopted to estimate the association relationship between the delivery rate and the traffic in the second time period.
Therefore, the target resource server obtains the ratio between the first delivery rate and the first traffic, which expresses the association between delivery rate and traffic in the first time period. Since the target resource server has already determined the target traffic for the second time period, the second delivery rate can be determined from this ratio together with the target traffic.
Optionally, step 304 may include any one of the following steps 3041 to 3043:
3041. According to the ratio between the first delivery rate and the first traffic, and the target traffic, determine the second delivery rate requested from the traffic scheduling server in the second time period using the following formula:

r2 = r1 × b2 / b1

where r2 is the second delivery rate, r1 is the first delivery rate, b2 is the target traffic, and b1 is the first traffic.
The reason why the second delivery rate is determined by the above formula is that:
Assume that the total number of resource links issued by the traffic scheduling server is m, the delivery rate of the target resource server is r, and the conversion coefficient is λ, where λ represents the proportion of the received resource links of the target resource server that the terminal actually accesses; multiple tests show λ ∈ (0.06, 0.15). Let n be the number of resource links of the target resource server actually accessed by the terminal, b the traffic generated on the target resource server by those accesses, and β the traffic coefficient, where β represents the traffic actually generated per accessed resource link; multiple tests show β ∈ (3, 8).
Then the following relationship holds:
n=m×r×λ;
b=n×β;
comparing the second time period with the first time period, assuming that the total number m of the resource links issued by the traffic scheduling server is unchanged, therefore, the following association relationship can be obtained:
b1 = m × r1 × λ × β;

b2 = m × r2 × λ × β;

that is:

r2 / r1 = b2 / b1, i.e., r2 = r1 × b2 / b1.
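The proportional relationship derived above (r2 = r1 × b2 / b1) can be expressed as a small helper; the function name and the example numbers are illustrative only.

```python
def next_delivery_rate(prev_rate: float, prev_traffic: float,
                       target_traffic: float) -> float:
    """Estimate the delivery rate for the next period from the
    rate/traffic ratio of the previous period: r2 = r1 * b2 / b1."""
    if prev_traffic <= 0:
        raise ValueError("previous traffic must be positive")
    return prev_rate * target_traffic / prev_traffic

# Example: a 40% rate produced 80 Mbps of traffic; to target 90 Mbps,
# the server would request a 45% rate for the next period.
rate = next_delivery_rate(0.40, 80.0, 90.0)
```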
3042. According to the ratio between the first delivery rate and the first traffic, and the target traffic, determine the second delivery rate requested from the traffic scheduling server in the second time period using the following formula:

r2 = Δ(r1 × b2 / b1, t)

where r1 is the first delivery rate, b1 is the first traffic, r2 is the second delivery rate, b2 is the target traffic, Δ(·) is a preset function, and t is the preset duration, i.e., the time interval between the time when the traffic scheduling server issues a resource link and the time when that resource link is accessed.
Consider that there is a delay between the traffic scheduling server issuing a resource link of the target resource server according to the delivery rate and the terminal accessing that link. That is, during a given time period some resource links are accessed and generate traffic on the target resource server, while others are temporarily not accessed and generate no traffic; once those links are accessed in later time periods, they still generate traffic on the target resource server. The ratio between the first delivery rate and the first traffic alone therefore cannot accurately reflect the association between delivery rate and traffic, and the influence of this delay must be taken into account.
Therefore, the preset duration is introduced in step 3042. It can be determined by counting the time intervals between the issuing time and the access time of a large number of resource links, and the delivery rate is distributed uniformly over the preset duration to achieve dynamic adjustment of the traffic.
For example, the time span of each time period is 1 minute, the terminal receives 10 resource links issued by the traffic scheduling server in the first time period, and only 1 resource link is accessed in the first time period, and the traffic actually generated by the target resource server is only the traffic generated by accessing the 1 resource link. And, the user's access to these 10 resource links takes a total of 7 minutes, spanning 7 time periods, with the resulting traffic being spread over these 7 time periods.
For another example, the experimental data shows that, for the issued resource link, about 50% of the traffic is effective within 3 minutes after the issue, and about more than 95% of the traffic is effective within 7 minutes.
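The uniform distribution of a computed rate over the preset duration t described in step 3042 can be sketched as below. This is a simplified stand-in for the preset function Δ, splitting a raw rate evenly across t periods (e.g. t = 7 one-minute periods, matching the experimental observation above); the function name is an assumption.

```python
def spread_rate(raw_rate: float, preset_duration: int) -> list[float]:
    """Simplified Δ(raw_rate, t): distribute the computed delivery rate
    uniformly over `preset_duration` time periods so that delayed
    accesses are absorbed instead of spiking a single period."""
    if preset_duration < 1:
        raise ValueError("preset duration must be at least one period")
    return [raw_rate / preset_duration] * preset_duration
```

The per-period shares sum back to the raw rate, so total delivered proportion is preserved while the traffic it induces is smoothed over the delay window.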
3043. When the first traffic exceeds the preset traffic upper limit, determine the second delivery rate requested from the traffic scheduling server in the second time period according to the ratio between the first delivery rate and the first traffic, and the target traffic, using the following formulas:

a1 = r1 × 10%;

a2 = Δ(r1 × (b2 / b1) × ρ, t);

r2 = avg(a1, a2);

where r1 is the first delivery rate, r2 is the second delivery rate, b1 is the first traffic, b2 is the target traffic, a1 is a delivery rate obtained by strongly suppressing the first delivery rate, a2 is another delivery rate obtained by weakly suppressing the first delivery rate, avg(a1, a2) is the average of a1 and a2, ρ is a preset suppression factor, and t is the preset duration, i.e., the time interval between the time when the traffic scheduling server issues a resource link and the time when that resource link is accessed.
The preset traffic upper limit is the maximum traffic the target resource server can reach while keeping the service stable. When the traffic of the target resource server is below this limit, the server can provide stable service; when the traffic exceeds it, the delivery rate must be effectively controlled so that the traffic generated in the next time period falls back below the preset upper limit.
In addition, the target resource server may further set a preset lower flow limit, which may be set according to a minimum flow when the bandwidth resource of the target resource server is effectively utilized.
When the traffic of the target resource server exceeds the preset upper limit, the delivery rate is suppressed using the above formulas. Multiplying the first delivery rate by 10% yields a1, a strongly suppressed version of the first delivery rate. Multiplying the rate obtained from the first delivery rate, the first traffic, and the target traffic by the preset suppression factor ρ yields a2, a weakly suppressed version of the first delivery rate. The second delivery rate is then obtained by combining the strongly and weakly suppressed rates.
Optionally, it is detected whether the traffic in each time period exceeds the preset upper limit. Only when the traffic exceeds the limit for a preset number of consecutive time periods is the delivery rate suppressed in the manner of step 3043; otherwise the delivery rate is determined in the manner of step 3041 or 3042. This ensures that the delivery rate is suppressed only when the bandwidth is continuously exceeded, avoids suppression on a merely temporary overrun, and minimizes the influence of temporary overruns.
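The strong/weak suppression of step 3043 can be sketched as follows. For brevity this sketch omits the delay-spreading over the preset duration t and treats ρ as a plain multiplier; the function name and the example value of ρ are assumptions.

```python
def suppressed_rate(prev_rate: float, prev_traffic: float,
                    target_traffic: float, rho: float = 0.5) -> float:
    """Combine a strongly suppressed rate (a1 = r1 * 10%) with a weakly
    suppressed rate (a2 = r1 * (b2 / b1) * rho) by averaging them, as
    applied when traffic has exceeded the preset upper limit."""
    a1 = prev_rate * 0.10                                   # strong suppression
    a2 = prev_rate * (target_traffic / prev_traffic) * rho  # weak suppression
    return (a1 + a2) / 2.0
```

With, say, a previous rate of 40%, previous traffic 100, target traffic 90, and ρ = 0.5, the averaged result lands well below the plain proportional update, pulling the next period's traffic back under the limit.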
It should be noted that steps 3041 to 3043 are optional and may or may not be executed. The target resource server may also acquire the second delivery rate in other manners; the specific manner is not limited in the embodiments of the present disclosure.
305. And the target resource server sends the second sending rate to the traffic scheduling server.
And after the second sending rate is obtained, the target resource server sends the second sending rate to the traffic scheduling server to request the traffic scheduling server to send the resource link of the target resource server according to the second sending rate.
It should be noted that, the above-mentioned steps 301-305 may be executed at the starting time of the second time period, or may also be executed at a time corresponding to a preset time length before the starting time of the second time period, where the preset time length is less than the time span of the time period, and may be, for example, 0.01 second, 0.1 second, and the like.
306. And the traffic scheduling server issues the resource link of the target resource server to at least one terminal according to the second issuing rate.
And the traffic scheduling server receives a second sending rate sent by the target resource server, performs traffic scheduling according to the second sending rate, and sends the resource link to at least one terminal, wherein the proportion of the resource link of the target resource server in the sent resource link is the second sending rate.
For a specific process of the traffic scheduling server issuing the resource link, please refer to the embodiment shown in fig. 4 below.
307. When at least one terminal detects the trigger operation of the resource link of the target resource server, the terminal sends a resource request carrying the resource link to the target resource server.
After receiving the resource link issued by the traffic scheduling server, the terminal displays it. The user can perform a trigger operation on the resource link, and upon detecting the trigger operation, the terminal sends a resource request carrying the resource link to the target resource server.
The trigger operation may be a click operation, a slide operation, or the like.
Optionally, the target resource server stores one or more resources, sets a resource identifier of each resource, and can distinguish different resources according to the resource identifier. The resource may include various types of resources such as text, video, picture, audio, and the like, and the resource identifier may be a resource name, a storage address of the resource, a resource number, or the like.
The resource link of the target resource server comprises address information of the target resource server and a resource identifier of a resource corresponding to the resource link, and the terminal can send a resource request carrying the resource link to the target resource server corresponding to the address information according to the address information in the resource link.
After the terminal displays the resource link, the user can immediately trigger the resource link, or trigger the resource link after a period of time. The embodiments of the present disclosure do not limit the timing of triggering the resource link.
308. And when the target resource server receives the resource request, acquiring the resource corresponding to the resource link, and sending the resource to the terminal, wherein the resource sending process generates flow on the target resource server.
After receiving the resource request, the target resource server obtains the resource corresponding to the resource identifier included in the resource link carried by the request, and sends the resource to the terminal, which displays it. The sending process generates traffic on the target resource server, which the server can count.
It should be noted that, in the embodiment of the present disclosure, the target resource server in the scheduling system is merely taken as an example to describe the process of determining the delivery rate, and the target resource server may be any resource server in the multiple resource servers. The process of determining the sending rate by other resource servers is similar to the embodiment of the present disclosure, and is not described in detail herein.
In the related art, a scheduling system includes a scheduling server and a plurality of resource servers connected to it. A terminal sends a resource request to the scheduling server; upon receiving the request, the scheduling server selects a target resource server from the plurality of resource servers using a global load balancing scheduling method and sends the IP (Internet Protocol) address of the target resource server to the terminal. The terminal then sends the resource request to the target resource server according to the IP address and acquires the resource from it, a process that generates traffic on the target resource server. However, this approach is suited to building large-scale scheduling systems and places high demands on the resource servers, which must prepare sufficient bandwidth resources in advance. If the traffic actually generated by a resource server does not reach the upper limit of its bandwidth resource, that bandwidth is wasted and its utilization rate is low.
In the method provided by the embodiment of the present disclosure, the target resource server obtains the first delivery rate it requested in the first time period, obtains the first traffic generated when at least one terminal accessed its resource links in the first time period, obtains the target traffic for the second time period, determines the second delivery rate to request in the second time period according to the ratio of the first delivery rate to the first traffic and the target traffic, and sends the second delivery rate to the traffic scheduling server, which then issues the resource links of the target resource server according to the second delivery rate. The method determines the delivery rate to request from the traffic scheduling server in the next time period from the rate requested in the previous period, the traffic actually generated in the previous period, and the target traffic for the next period. It changes a passive mode into an active one: on the premise of keeping the service stable, the traffic of the next time period can be anticipated and controlled, the problem of traffic overflow is avoided, and the sensitivity of traffic sensing is improved. Bandwidth resources are controlled by adjusting the traffic, so there is no need to prepare excess bandwidth against traffic spikes; waste of bandwidth resources is avoided and their utilization rate is improved.
In addition, a scheme for performing traffic scheduling through a Domain Name System (DNS) server is also proposed in the related art, in which a terminal sends a Domain Name resolution request to the DNS server, the DNS server selects an IP address of a resource server from a plurality of IP addresses corresponding to a Domain Name according to a load balancing policy based on the Domain Name provided by the terminal, and sends the IP address to the terminal, and the terminal accesses resources provided by the resource server according to the IP address of the resource server. However, this method needs to pass through the DNS server of the operator, and is greatly influenced by the operator. The method provided by the embodiment of the disclosure can bypass the DNS, avoids the influence caused by the DNS, and directly acts on the terminal, thereby realizing accurate flow control and improving the utilization rate of equipment.
Traffic scheduling was performed both in the conventional manner of the related art and with the method provided by the embodiment of the present disclosure; the scheduling results are as follows:
when traffic is scheduled in a conventional manner in the related art, a traffic curve generated by a resource server in each time period is as shown in fig. 4, an abscissa represents time, and an ordinate represents a ratio of generated traffic, the traffic curve generates higher traffic in a part of the time period and lower traffic in a part of the time period, in order to enable the resource server to provide stable service also in the time period with higher traffic, sufficient bandwidth resources need to be prepared, and the utilization rate of the bandwidth resources in the time period with lower traffic is low, which causes waste of bandwidth resources.
When the method provided by the embodiment of the present disclosure is used to schedule traffic, a curve of the requested delivery rate of the resource server in each time period is shown in fig. 5, an abscissa represents time, an ordinate represents delivered resource proportion, and a curve of traffic generated by the resource server in each time period is shown in fig. 6. The resource server is provided with 32% of bandwidth resources, and the upper limit of the bandwidth is controlled to 31% in order to provide a stable service.
Comparing fig. 5 and fig. 6, when the traffic generated by the resource server is high, a lower delivery rate is determined in the next time period, and the traffic generated by the resource server is reduced by reducing the delivery rate. When the flow generated by the resource server is lower, a higher issuing rate is determined in the next time period, and the flow generated by the resource server is increased by increasing the issuing rate. The control method can keep the stable fluctuation of the flow, control the flow below the upper limit of the bandwidth, avoid preparing excessive bandwidth resources and improve the bandwidth utilization rate.
Fig. 7 is a flowchart illustrating a traffic scheduling method according to an exemplary embodiment, where as shown in fig. 7, the interaction subject of the disclosed embodiment is at least one terminal, a traffic scheduling server, and a plurality of resource servers, and the method includes:
701. and the resource servers send the sending rate to the traffic scheduling server.
In each time period, the resource servers send the determined sending rates to the traffic scheduling server, and the sending rates of different resource servers can be the same or different.
702. And the traffic scheduling server receives the sending rates sent by the plurality of resource servers.
703. And the traffic scheduling server stores the issuing rates of the plurality of resource servers.
Optionally, each resource server has corresponding address information, and the traffic scheduling server may store the delivery rate of each resource server in the database in correspondence with the address information, so as to distinguish the delivery rates of different resource servers.
Optionally, after receiving the delivery rates sent by the plurality of resource servers, the traffic scheduling server sums them. When the sum equals 100%, the traffic scheduling server stores the received delivery rates and, until it receives the rates for the next time period, issues the resource links of the plurality of resource servers according to the stored rates.
When the sum does not equal 100%, the traffic scheduling server obtains the proportional relationship among the delivery rates and recalculates them according to that relationship so that the recalculated rates sum to 100%. Each recalculated rate is returned to the corresponding resource server, which replaces its previously stored rate with the returned one. The traffic scheduling server stores the recalculated rates and, until it receives the rates for the next time period, issues the resource links of the plurality of resource servers according to the stored rates.
For example, in a time period, three resource servers send down rates to the traffic scheduling server, where the down rate sent by each resource server is shown in table 1:
TABLE 1
Resource server   | Delivery rate
Resource server 1 | 40%
Resource server 2 | 30%
Resource server 3 | 50%
The sum of the delivery rates of the three resource servers is 120%, and at this time, the traffic scheduling server recalculates the new delivery rates according to the proportional relationship among the delivery rates of the three resource servers, as shown in table 2:
TABLE 2
Resource server   | Delivery rate
Resource server 1 | 33.3%
Resource server 2 | 25%
Resource server 3 | 41.7%
The traffic scheduling server sends the recalculated delivery rate to the corresponding resource server, and each resource server replaces the previously stored delivery rate with the delivery rate sent by the traffic scheduling server, for example, the resource server 1 replaces the original 40% with 33.3%.
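The renormalization illustrated by Tables 1 and 2 can be sketched as below; the server names are placeholders.

```python
def normalize_rates(rates: dict[str, float]) -> dict[str, float]:
    """Rescale delivery rates so they sum to 100% while preserving
    their proportional relationship (as in Table 1 -> Table 2)."""
    total = sum(rates.values())
    if total <= 0:
        raise ValueError("delivery rates must sum to a positive value")
    return {server: rate / total for server, rate in rates.items()}

# Rates summing to 120% are rescaled: 40/120, 30/120, 50/120.
norm = normalize_rates({"rs1": 0.40, "rs2": 0.30, "rs3": 0.50})
```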
704. And the traffic scheduling server selects one resource server according to the issuing rates of the plurality of resource servers and issues the resource link of the selected resource server to at least one terminal.
Optionally, this step 704 may include steps 7041 or 7042:
7041. when a resource request sent by any terminal is received, one resource server is selected according to the issuing rate of the multiple resource server requests, and the resource link corresponding to the resource request on the selected resource server is issued to the terminal.
In each time period, while issuing resource links, the traffic scheduling server counts the proportion of each resource server's links among the links already issued in that period, that is, the delivery rate each resource server has achieved in the period. When the traffic scheduling server receives a resource request from a terminal, it obtains the difference between each resource server's requested delivery rate and its achieved delivery rate, selects from the plurality of resource servers either any server whose difference is greater than 0 or the server with the largest difference, and issues to the terminal the resource link on the selected server corresponding to the resource request.
The traffic scheduling server may store a resource link provided by each resource server, where the resource link includes a resource identifier. And the resource request sent by the terminal carries the resource identifier to be requested, so that the traffic scheduling server acquires the resource link provided by the resource server, including the resource identifier, after selecting the resource server, and sends the resource link to the terminal.
7042. And issuing the resource links of the resource servers to at least one terminal according to the issuing rate of the request of the resource servers.
The traffic scheduling server may recommend a resource link for the at least one terminal according to a recommendation policy.
Taking the traffic scheduling server to recommend resources for a certain terminal as an example, in each time period, the traffic scheduling server may count the respective proportions of the resource links of the plurality of resource servers in the resource links that have been delivered in the time period, that is, the delivery rates that have been achieved by the plurality of resource servers in the time period. The traffic scheduling server can compare the delivery rates which have been reached by the resource servers, select the resource server corresponding to the minimum delivery rate, and deliver the resource link on the resource server to the terminal; or the resource link of the corresponding proportion on each resource server can be issued to the terminal according to the reciprocal of the proportional relation among the issued rates which have been reached by the plurality of resource servers.
Alternatively, the difference between each resource server's requested delivery rate and its achieved delivery rate is obtained, and from the plurality of resource servers either any server whose difference is greater than 0 or the server with the largest difference is selected, and the resource link on the selected server is issued to the terminal.
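The largest-difference selection described above can be sketched as follows; the function and server names are illustrative.

```python
def pick_resource_server(requested: dict[str, float],
                         achieved: dict[str, float]) -> str:
    """Choose the resource server whose requested delivery rate exceeds
    its achieved delivery rate by the largest positive margin."""
    gaps = {s: requested[s] - achieved.get(s, 0.0) for s in requested}
    eligible = {s: g for s, g in gaps.items() if g > 0}
    if not eligible:
        raise RuntimeError("every server has reached its requested rate")
    return max(eligible, key=eligible.get)
```

Servers already at or above their requested rate are excluded, so over many requests the achieved proportions converge toward the requested ones.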
The traffic scheduling server may store the resource links provided by each resource server, and may select the resource links provided by a resource server according to a recommendation policy each time the resource link of a certain resource server is issued.
According to the method provided by the embodiment of the present disclosure, the traffic scheduling server can issue the resource links of the resource servers according to the delivery rates they requested, and each resource server can determine the delivery rate to request from the traffic scheduling server in the next time period from the rate requested in the previous period, the traffic actually generated in the previous period, and the target traffic for the next period. The traffic of the next time period can thus be anticipated and controlled, avoiding traffic overflow. There is no need to prepare excess bandwidth resources, so waste of bandwidth is avoided and the utilization rate of bandwidth resources is improved.
Fig. 8 is a block diagram illustrating a delivery rate processing apparatus according to an exemplary embodiment. As shown in fig. 8, the apparatus includes:
an issue rate obtaining unit 801 configured to obtain a first issue rate requested by the traffic scheduling server within a first time period, where the issue rate is a proportion occupied by a resource link of a target resource server in resource links issued by the traffic scheduling server to at least one terminal;
a first traffic obtaining unit 802, configured to obtain first traffic in a first time period, where the first traffic is generated when at least one terminal accesses a resource link of a target resource server;
a second flow acquiring unit 803, configured to acquire a target flow in a second time period, where the second time period is a time period next to the first time period;
the delivery rate determining unit 804 is configured to determine a second delivery rate requested to the traffic scheduling server within a second time period according to a ratio between the first delivery rate and the first traffic and the target traffic.
In one possible implementation, as shown in fig. 9, the apparatus further includes:
a first sending unit 805 configured to send the first sending rate to a traffic scheduling server, where the traffic scheduling server is configured to send the resource link of the target resource server according to the first sending rate.
In another possible implementation manner, the sending rate determining unit 804 includes:
a first determining subunit 8041, configured to determine, according to a ratio between the first delivery rate and the first traffic and the target traffic, a second delivery rate requested to the traffic scheduling server within the second time period by using the following formula:
r₂ = (r₁ / b₁) × b₂
where r₂ is the second delivery rate, r₁ is the first delivery rate, b₂ is the target traffic, and b₁ is the first traffic.
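As a minimal sketch, the formula above translates directly into code (the names are illustrative):

```python
def second_delivery_rate(r1, b1, b2):
    """r2 = (r1 / b1) * b2: the new rate keeps the rate-to-traffic
    ratio of the previous period while aiming at the target traffic b2."""
    return r1 / b1 * b2

# A 20% rate that produced 100 units of traffic is raised to 30%
# when the next period's target is 150 units.
r2 = second_delivery_rate(0.20, 100.0, 150.0)  # -> 0.30
```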
In another possible implementation manner, the delivery rate determining unit 804 includes:
a second determining subunit 8042, configured to determine, according to the ratio between the first delivery rate and the first traffic and the target traffic, a second delivery rate requested to the traffic scheduling server within the second time period by using the following formula:
[Equation image BDA0002161841980000192 not reproduced in the text]
where r₁ is the first delivery rate, b₁ is the first traffic, r₂ is the second delivery rate, b₂ is the target traffic, Δ(·) is a preset function, and t is a preset duration, the preset duration being the time interval between the time when the traffic scheduling server issues any resource link and the time when that resource link is accessed.
In another possible implementation manner, the delivery rate determining unit 804 includes:
a third determining subunit 8043, configured to, when the first traffic exceeds the preset upper traffic limit, determine, according to a ratio between the first issuance rate and the first traffic and the target traffic, a second issuance rate requested by the traffic scheduling server within a second time period by using the following formula:
a₁ = r₁ × 10%;
[Equation images BDA0002161841980000193 and BDA0002161841980000194 not reproduced in the text]
where r₁ is the first delivery rate, r₂ is the second delivery rate, b₁ is the first traffic, b₂ is the target traffic, a₁ is a delivery rate obtained by suppressing the first delivery rate, a₂ is another delivery rate obtained by suppressing the first delivery rate, avg(a₁ + a₂) is the average of a₁ and a₂, ρ is a preset suppression factor, and t is a preset duration, the preset duration being the time interval between the time when the traffic scheduling server issues any resource link and the time when that resource link is accessed.
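Of the three suppression formulas, only a₁ = r₁ × 10% survives in the text; the expressions for a₂ and for combining the two appear only as unreproduced images. The sketch below therefore fills those gaps with hypothetical choices (a proportional candidate for a₂ and a ρ-damped average for r₂), purely to illustrate the clamping idea when traffic exceeds the preset upper limit.

```python
def suppressed_second_rate(r1, b1, b2, traffic_cap, rho=0.5):
    """Hypothetical reconstruction: only a1 = r1 * 10% is stated in the
    text; the formulas for a2 and the final blend are assumptions."""
    if b1 <= traffic_cap:
        return r1 / b1 * b2            # normal proportional update
    a1 = r1 * 0.10                      # stated: suppress to 10% of r1
    a2 = r1 / b1 * b2                   # assumed: proportional candidate
    return rho * (a1 + a2) / 2          # assumed: damped avg(a1, a2)

# Traffic (200) exceeded the cap (180), so the next rate is clamped well
# below the plain proportional value of 0.30.
r2 = suppressed_second_rate(0.40, 200.0, 150.0, traffic_cap=180.0)
```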
In another possible implementation manner, the apparatus further includes:
a second sending unit 806, configured to send the second delivery rate to the traffic scheduling server, where the traffic scheduling server is configured to deliver the resource link of the target resource server according to the second delivery rate.
It should be noted that the delivery rate processing apparatus provided in the foregoing embodiment is illustrated only by the division into the functional units above; in practical applications, the functions may be allocated to different functional units as needed, that is, the internal structure of the resource server may be divided into different functional units to complete all or part of the functions described above. In addition, the delivery rate processing apparatus and the delivery rate processing method provided by the above embodiments belong to the same concept; their specific implementation processes are detailed in the method embodiments and are not described here again.
With regard to the apparatus in the above-described embodiment, the specific manner in which each unit performs the operation has been described in detail in the embodiment related to the method, and will not be described in detail here.
Fig. 10 is a block diagram illustrating a terminal according to an example embodiment. The terminal 1000 is used for executing the steps performed by the terminal in the above embodiments, and may be a portable mobile terminal, such as a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. Terminal 1000 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
In general, terminal 1000 can include: one or more processors 1001 and one or more memories 1002.
Processor 1001 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so forth. The processor 1001 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1001 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also referred to as a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1001 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed on the display screen. In some embodiments, the processor 1001 may further include an AI (Artificial Intelligence) processor for processing a computing operation related to machine learning.
Memory 1002 may include one or more computer-readable storage media, which may be non-transitory. The memory 1002 may also include volatile memory or non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 1002 is configured to store at least one instruction to be executed by the processor 1001 to implement the delivery rate processing methods provided by the method embodiments herein.
In some embodiments, terminal 1000 can also optionally include: a peripheral interface 1003 and at least one peripheral. The processor 1001, memory 1002 and peripheral interface 1003 may be connected by a bus or signal line. Various peripheral devices may be connected to peripheral interface 1003 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1004, touch screen display 1005, camera 1006, audio circuitry 1007, positioning components 1008, and power supply 1009.
The peripheral interface 1003 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 1001 and the memory 1002. In some embodiments, processor 1001, memory 1002, and peripheral interface 1003 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1001, the memory 1002, and the peripheral interface 1003 may be implemented on separate chips or circuit boards, which are not limited by this embodiment.
The radio frequency circuit 1004 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1004 communicates with communication networks and other communication devices via electromagnetic signals, converting an electrical signal into an electromagnetic signal for transmission, or converting a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1004 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1004 may communicate with other terminals via at least one wireless communication protocol, including but not limited to metropolitan area networks, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1004 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1005 is used to display a UI (User Interface), which may include graphics, text, icons, video, and any combination thereof. When the display screen 1005 is a touch display screen, it also has the ability to capture touch signals on or over its surface; such a touch signal may be input to the processor 1001 as a control signal for processing. In this case, the display screen 1005 may also provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 1005, disposed on the front panel of terminal 1000; in other embodiments, there may be at least two display screens 1005, respectively disposed on different surfaces of terminal 1000 or in a folded design; in still other embodiments, display screen 1005 may be a flexible display disposed on a curved or folded surface of terminal 1000. The display screen 1005 may even be arranged in a non-rectangular irregular shape, i.e., a shaped screen, and may be made of materials such as LCD (Liquid Crystal Display) or OLED (Organic Light-Emitting Diode).
The camera assembly 1006 is used to capture images or video. Optionally, the camera assembly 1006 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1006 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuit 1007 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 1001 for processing or inputting the electric signals to the radio frequency circuit 1004 for realizing voice communication. For stereo sound collection or noise reduction purposes, multiple microphones can be provided, each at a different location of terminal 1000. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 1001 or the radio frequency circuit 1004 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuit 1007 may also include a headphone jack.
The positioning component 1008 is used to locate the current geographic location of terminal 1000 for navigation or LBS (Location Based Service). The positioning component 1008 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
Power supply 1009 is used to supply power to various components in terminal 1000. The power source 1009 may be alternating current, direct current, disposable battery, or rechargeable battery. When the power source 1009 includes a rechargeable battery, the rechargeable battery may support wired charging or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 1000 can also include one or more sensors 1010. The one or more sensors 1010 include, but are not limited to: acceleration sensor 1011, gyro sensor 1012, pressure sensor 1013, fingerprint sensor 1014, optical sensor 1015, and proximity sensor 1016.
Acceleration sensor 1011 can detect acceleration magnitudes on three coordinate axes of a coordinate system established with terminal 1000. For example, the acceleration sensor 1011 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 1001 may control the touch display screen 1005 to display a user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1011. The acceleration sensor 1011 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 1012 may detect a body direction and a rotation angle of the terminal 1000, and the gyro sensor 1012 and the acceleration sensor 1011 may cooperate to acquire a 3D motion of the user on the terminal 1000. From the data collected by the gyro sensor 1012, the processor 1001 may implement the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensor 1013 may be disposed on a side bezel of terminal 1000 and/or underneath touch display 1005. When pressure sensor 1013 is disposed on a side frame of terminal 1000, a user's grip signal on terminal 1000 can be detected, and processor 1001 performs left-right hand recognition or shortcut operation according to the grip signal collected by pressure sensor 1013. When the pressure sensor 1013 is disposed at a lower layer of the touch display screen 1005, the processor 1001 controls the operability control on the UI interface according to the pressure operation of the user on the touch display screen 1005. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 1014 is used to collect a fingerprint of the user, and the processor 1001 identifies the user according to the fingerprint collected by the fingerprint sensor 1014, or the fingerprint sensor 1014 identifies the user according to the collected fingerprint. Upon identifying that the user's identity is a trusted identity, the processor 1001 authorizes the user to have relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings, etc. Fingerprint sensor 1014 can be disposed on the front, back, or side of terminal 1000. When a physical key or vendor Logo is provided on terminal 1000, fingerprint sensor 1014 can be integrated with the physical key or vendor Logo.
The optical sensor 1015 is used to collect the ambient light intensity. In one embodiment, the processor 1001 may control the display brightness of the touch display screen 1005 according to the intensity of the ambient light collected by the optical sensor 1015. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 1005 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 1005 is turned down. In another embodiment, the processor 1001 may also dynamically adjust the shooting parameters of the camera assembly 1006 according to the intensity of the ambient light collected by the optical sensor 1015.
Proximity sensor 1016, also known as a distance sensor, is typically disposed on the front panel of terminal 1000 and is used to gather the distance between the user and the front face of terminal 1000. In one embodiment, when proximity sensor 1016 detects that the distance between the user and the front surface of terminal 1000 gradually decreases, the processor 1001 controls touch display 1005 to switch from the bright-screen state to the screen-off state; when proximity sensor 1016 detects that the distance gradually increases, the processor 1001 controls touch display 1005 to switch from the screen-off state to the bright-screen state.
Those skilled in the art will appreciate that the configuration shown in FIG. 10 is not intended to be limiting and that terminal 1000 can include more or fewer components than shown, or some components can be combined, or a different arrangement of components can be employed.
Fig. 11 is a schematic structural diagram of a server according to an exemplary embodiment. The server 1100 may vary considerably in configuration and performance, and may include one or more processors (CPUs) 1101 and one or more memories 1102, where the memory 1102 stores at least one instruction that is loaded and executed by the processor 1101 to implement the methods provided by the above method embodiments. Of course, the server may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface for input and output, and may include other components for implementing the functions of the device, which are not described here again.
The server 1100 may be configured to perform the steps performed by the resource server in the delivery rate processing method.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium storing instructions that, when executed by a processor of a delivery rate processing apparatus, enable the delivery rate processing apparatus to perform the steps performed by the resource server in the delivery rate processing method.
In an exemplary embodiment, there is also provided a computer program product whose instructions, when executed by a processor of a delivery rate processing apparatus, enable the delivery rate processing apparatus to execute the steps performed by the resource server in the delivery rate processing method.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (12)

1. A method for processing an issue rate is applied to a target resource server in a scheduling system, the scheduling system includes a traffic scheduling server and a plurality of resource servers, the traffic scheduling server is connected with the plurality of resource servers, and the method includes:
acquiring a first issuing rate requested to the traffic scheduling server in a first time period, wherein the issuing rate is the proportion of resource links of the target resource server in resource links issued to at least one terminal by the traffic scheduling server;
acquiring first flow in the first time period, wherein the first flow is generated when the at least one terminal accesses a resource link of the target resource server;
acquiring target flow in a second time period, wherein the second time period is the next time period of the first time period;
determining a second delivery rate requested to the traffic scheduling server within the second time period according to the ratio of the first delivery rate to the first traffic and the target traffic;
and sending the second sending rate to the traffic scheduling server, wherein the traffic scheduling server is used for sending the resource link of the target resource server according to the second sending rate.
2. The method of claim 1, wherein before obtaining the first delivery rate requested from the traffic scheduling server within the first time period, the method further comprises:
and sending the first sending rate to the traffic scheduling server, wherein the traffic scheduling server is used for sending the resource link of the target resource server according to the first sending rate.
3. The method according to claim 1, wherein the determining a second delivery rate requested to the traffic scheduling server within the second time period according to the ratio of the first delivery rate to the first traffic and the target traffic comprises:
determining a second delivery rate requested to the traffic scheduling server within the second time period by adopting the following formula according to the ratio between the first delivery rate and the first traffic and the target traffic:
r₂ = (r₁ / b₁) × b₂
wherein r₂ is the second delivery rate, r₁ is the first delivery rate, b₂ is the target traffic, and b₁ is the first traffic.
4. The method according to claim 1, wherein the determining a second delivery rate requested to the traffic scheduling server within the second time period according to the ratio of the first delivery rate to the first traffic and the target traffic comprises:
determining a second delivery rate requested to the traffic scheduling server within the second time period by adopting the following formula according to the ratio between the first delivery rate and the first traffic and the target traffic:
[Equation image FDA0003467441450000021 not reproduced in the text]
wherein r₁ is the first delivery rate, b₁ is the first traffic, r₂ is the second delivery rate, b₂ is the target traffic, Δ(·) is a preset function, and t is a preset duration, the preset duration being the time interval between the time when the traffic scheduling server issues any resource link and the time when that resource link is accessed.
5. The method according to claim 1, wherein the determining a second delivery rate requested to the traffic scheduling server within the second time period according to the ratio of the first delivery rate to the first traffic and the target traffic comprises:
when the first flow exceeds a preset flow upper limit, determining a second delivery rate requested to the flow scheduling server within the second time period by adopting the following formula according to the proportion between the first delivery rate and the first flow and the target flow:
a₁ = r₁ × 10%;
[Equation images FDA0003467441450000022 and FDA0003467441450000023 not reproduced in the text]
wherein r₁ is the first delivery rate, r₂ is the second delivery rate, b₁ is the first traffic, b₂ is the target traffic, a₁ is a delivery rate obtained by suppressing the first delivery rate, a₂ is another delivery rate obtained by suppressing the first delivery rate, avg(a₁ + a₂) is the average of a₁ and a₂, ρ is a preset suppression factor, and t is a preset duration, the preset duration being the time interval between the time when the traffic scheduling server issues any resource link and the time when that resource link is accessed.
6. An issue rate processing apparatus, which is applied to a target resource server in a scheduling system, where the scheduling system includes a traffic scheduling server and a plurality of resource servers, and the traffic scheduling server is connected to the plurality of resource servers, the apparatus includes:
the system comprises an issuing rate obtaining unit, a sending rate obtaining unit and a sending unit, wherein the issuing rate obtaining unit is configured to obtain a first issuing rate requested to the traffic scheduling server in a first time period, and the issuing rate is the proportion of resource links of a target resource server in resource links issued to at least one terminal by the traffic scheduling server;
a first traffic obtaining unit configured to obtain first traffic in the first time period, where the first traffic is generated when the at least one terminal accesses a resource link of the target resource server;
a second flow rate obtaining unit configured to obtain a target flow rate in a second time period, which is a next time period of the first time period;
the delivery rate determining unit is configured to determine a second delivery rate requested to the traffic scheduling server within the second time period according to the ratio between the first delivery rate and the first traffic and the target traffic;
and the second sending unit is configured to send the second delivery rate to the traffic scheduling server, and the traffic scheduling server is configured to deliver the resource link of the target resource server according to the second delivery rate.
7. The apparatus of claim 6, further comprising:
a first sending unit, configured to send the first sending rate to the traffic scheduling server, where the traffic scheduling server is configured to send the resource link of the target resource server according to the first sending rate.
8. The apparatus of claim 6, wherein the delivery rate determining unit comprises:
a first determining subunit, configured to determine, according to the ratio between the first delivery rate and the first traffic and the target traffic, a second delivery rate requested to the traffic scheduling server within the second time period by using the following formula:
r₂ = (r₁ / b₁) × b₂
wherein r₂ is the second delivery rate, r₁ is the first delivery rate, b₂ is the target traffic, and b₁ is the first traffic.
9. The apparatus of claim 6, wherein the delivery rate determining unit comprises:
a second determining subunit, configured to determine, according to the ratio between the first delivery rate and the first traffic and the target traffic, a second delivery rate requested to the traffic scheduling server within the second time period by using the following formula:
[Equation image FDA0003467441450000032 not reproduced in the text]
wherein r₁ is the first delivery rate, b₁ is the first traffic, r₂ is the second delivery rate, b₂ is the target traffic, Δ(·) is a preset function, and t is a preset duration, the preset duration being the time interval between the time when the traffic scheduling server issues any resource link and the time when that resource link is accessed.
10. The apparatus of claim 6, wherein the delivery rate determining unit comprises:
a third determining subunit, configured to, when the first traffic exceeds a preset upper traffic limit, determine, according to a ratio between the first delivery rate and the first traffic and the target traffic, a second delivery rate requested to the traffic scheduling server within the second time period by using the following formula:
a₁ = r₁ × 10%;
[Equation images FDA0003467441450000041 and FDA0003467441450000042 not reproduced in the text]
wherein r₁ is the first delivery rate, r₂ is the second delivery rate, b₁ is the first traffic, b₂ is the target traffic, a₁ is a delivery rate obtained by suppressing the first delivery rate, a₂ is another delivery rate obtained by suppressing the first delivery rate, avg(a₁ + a₂) is the average of a₁ and a₂, ρ is a preset suppression factor, and t is a preset duration, the preset duration being the time interval between the time when the traffic scheduling server issues any resource link and the time when that resource link is accessed.
11. A resource server, characterized in that the resource server comprises:
one or more processors;
volatile or non-volatile memory for storing instructions executable by the one or more processors;
wherein the one or more processors are configured to perform the issue rate processing method of any one of claims 1 to 5.
12. A non-transitory computer-readable storage medium having instructions therein which, when executed by a processor of a resource server, enable the resource server to perform the issue rate processing method of any one of claims 1 to 5.
CN201910734906.9A 2019-08-09 2019-08-09 Distribution rate processing method, device, server and storage medium Active CN110365545B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910734906.9A CN110365545B (en) 2019-08-09 2019-08-09 Distribution rate processing method, device, server and storage medium


Publications (2)

Publication Number Publication Date
CN110365545A CN110365545A (en) 2019-10-22
CN110365545B true CN110365545B (en) 2022-08-09

Family

ID=68223654

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910734906.9A Active CN110365545B (en) 2019-08-09 2019-08-09 Distribution rate processing method, device, server and storage medium

Country Status (1)

Country Link
CN (1) CN110365545B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111970132B (en) * 2020-06-29 2023-05-26 百度在线网络技术(北京)有限公司 Control method, device and server for OTA data packet issuing flow

Citations (1)

Publication number Priority date Publication date Assignee Title
CN105306539A (en) * 2015-09-22 2016-02-03 北京金山安全软件有限公司 Service information display control method and device and Internet service information display platform

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
US6600930B1 (en) * 1997-07-11 2003-07-29 Sony Corporation Information provision system, information regeneration terminal, and server
JP4606333B2 (en) * 2005-09-20 2011-01-05 富士通株式会社 Routing control method
US8649266B2 (en) * 2009-07-27 2014-02-11 Lester F. Ludwig Flow state aware management of QoS with a distributed classifier
CN107046504B (en) * 2016-02-05 2020-08-25 华为技术有限公司 Method and controller for traffic engineering in a communication network
CN107872402B (en) * 2017-11-15 2021-04-09 北京奇艺世纪科技有限公司 Global flow scheduling method and device and electronic equipment
CN108830572B (en) * 2018-06-15 2023-11-14 腾讯科技(深圳)有限公司 Resource transfer method, device, storage medium and equipment


Non-Patent Citations (2)

Title
Genetic expression programming: A new approach for QoS traffic prediction in EPONs; I-Shyan Hwang et al.; 2012 Fourth International Conference on Ubiquitous and Future Networks (ICUFN); 2012-08-09; full text *
Research and Implementation of a Scalable Spider Load-Balancing Strategy; Li Caimao et al.; Computer & Digital Engineering; 2009-02-20 (No. 02); full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant