CN103237031A - Method and device for orderly backing to source in content distribution network - Google Patents
- Publication number
- CN103237031A CN103237031A CN201310149174XA CN201310149174A CN103237031A CN 103237031 A CN103237031 A CN 103237031A CN 201310149174X A CN201310149174X A CN 201310149174XA CN 201310149174 A CN201310149174 A CN 201310149174A CN 103237031 A CN103237031 A CN 103237031A
- Authority
- CN
- China
- Prior art keywords
- user
- source station
- request
- server
- limit value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Computer And Data Communications (AREA)
- Information Transfer Between Computers (AREA)
Abstract
The invention provides a method and a device for orderly returning to the source in a content distribution network. The method comprises the following steps: receiving a user request; if the user request needs to go back to the source, judging whether the current processing load of the origin server has reached a preset limit; if the current processing load of the origin server has reached the preset limit, queuing the user request to wait; and when the processing load of the origin server falls below the preset limit, having the origin server preferentially process the user request that was queued first. The method and device relieve the pressure on the origin server and improve the response efficiency of users' back-to-source requests.
Description
Technical field
The present invention relates to back-to-source methods in a content distribution network, and in particular to a method and a device for orderly returning to the source in a content distribution network.
Background art
When a content distribution network (CDN) receives a user request that cannot be served from cache, it must go back to the origin server to fetch the content the user needs. In the prior art, a CDN does no special handling of back-to-source requests: every user request that goes back to the source is treated equally.
Fig. 1 shows a back-to-source processing method in the prior art, comprising: at S11, clients send a plurality of user requests; at S12, an edge node server receives the user requests; at S13, it is judged whether a user request needs to go back to the source; if not, the flow goes to S14, where the edge node server processes and responds to the request; if so, the flow advances to S15, where the origin server processes the request and the result is transferred to the edge node server and returned to the client.
With this traditional approach, when the number of client requests is large, for example during group-buying, special-offer, or flash-sale promotions on an e-commerce website, the site's traffic surges within a short time, and the number of client requests that simultaneously need to go back to the source can exceed what the origin server can bear, causing the origin server to respond slowly to all user requests or even crash.
Summary of the invention
The technical problem to be solved by the present invention is to provide a method and a device for orderly returning to the source in a content distribution network, which help reduce the pressure on the origin server and improve the response efficiency of users' back-to-source requests.
To solve the above technical problem, the invention provides a method for orderly returning to the source in a content distribution network, comprising:
receiving a user request;
if the user request needs to go back to the source, judging whether the current processing load of the origin server has reached a preset limit;
if the current processing load of the origin server has reached the preset limit, queuing the user request to wait;
when the processing load of the origin server falls below the preset limit, having the origin server preferentially process the user request that was queued first.
According to an embodiment of the present invention, the method further comprises: if the user request does not need to go back to the source, processing the user request with an edge node server and returning the response result.
According to an embodiment of the present invention, the method further comprises: if the current processing load of the origin server has not reached the preset limit, accessing the origin server to obtain the required data.
According to an embodiment of the present invention, queuing the user request to wait comprises: adding the user request to a waiting queue in chronological order.
According to an embodiment of the present invention, the method further comprises: if the processing load of the origin server is below the preset limit and no user request is currently queued, processing a new user request directly without queuing.
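The gating logic of the steps above can be illustrated with a short sketch. This sketch is not part of the patent: the names (`PRESET_LIMIT`, `admit`, `on_origin_done`) and the counter-based load model are assumptions made for illustration only.

```python
from collections import deque

PRESET_LIMIT = 2     # assumed limit on concurrent back-to-source requests
origin_load = 0      # current processing load of the origin server (request count)
waiting = deque()    # waiting queue, kept in chronological (arrival) order

def admit(request):
    """Admit a back-to-source request: process it now, or queue it to wait."""
    global origin_load
    if origin_load < PRESET_LIMIT and not waiting:
        origin_load += 1           # below the limit and nothing waiting: no queuing
        return "process"
    waiting.append(request)        # limit reached: queue in arrival order
    return "queued"

def on_origin_done():
    """When the origin finishes a request, the earliest-queued request goes first."""
    global origin_load
    origin_load -= 1
    if waiting and origin_load < PRESET_LIMIT:
        origin_load += 1
        return waiting.popleft()   # head of the queue is processed preferentially
    return None
```

Requests admitted while the load is below the limit run immediately; once the limit is reached they wait in arrival order, and each completion at the origin releases exactly one waiter.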
The present invention also provides a device for orderly returning to the source in a content distribution network, comprising:
a user interface unit, which receives a user request;
a judging unit, which, if the user request needs to go back to the source, judges whether the current processing load of the origin server has reached a preset limit;
a sorting unit, which, if the current processing load of the origin server has reached the preset limit, queues the user request to wait;
a first processing unit, which, when the processing load of the origin server falls below the preset limit, preferentially transfers the user request that was queued first to the origin server for processing.
According to an embodiment of the present invention, the device further comprises: a second processing unit, which, if the user request does not need to go back to the source, transfers the user request to the edge node server for processing and returns the response result.
According to an embodiment of the present invention, the device further comprises: a third processing unit, which, if the current processing load of the origin server has not reached the preset limit, accesses the origin server to obtain the required data.
According to an embodiment of the present invention, the sorting unit adds the user request to a waiting queue in chronological order.
According to an embodiment of the present invention, the device further comprises: a fourth processing unit, which, if the processing load of the origin server is below the preset limit and no user request is currently queued, processes a new user request directly without queuing.
Compared with the prior art, the present invention has the following advantages:
In the orderly back-to-source method and device of the embodiments of the invention, when a user request needs to go back to the source, it is first judged whether the current processing load of the origin server has reached a preset limit. If it has, the user requests that need to go back to the source are queued to wait; when the processing load of the origin server falls below the limit, the request that was queued first is processed preferentially; and if no user request is waiting, a new user request is processed directly without queuing. The origin server is thus prevented from handling too many client requests at once, which reduces the pressure on the origin server and improves the response efficiency of back-to-source requests.
Brief description of the drawings
Fig. 1 is a schematic flowchart of a back-to-source request processing method in the prior art;
Fig. 2 is a schematic flowchart of the method for orderly returning to the source in a content distribution network according to an embodiment of the invention;
Fig. 3 is a structural block diagram of the device for orderly returning to the source in a content distribution network according to an embodiment of the invention.
Detailed description of the embodiments
The invention is further described below with reference to specific embodiments and the drawings, which should not be taken to limit the protection scope of the invention.
The orderly back-to-source method of the present embodiment mainly comprises the following steps:
receiving user requests, for example an edge node server receiving a plurality of user requests from clients;
if a user request does not need to go back to the source, processing it directly at the edge node server;
if the user request needs to go back to the source, judging whether the current processing load of the origin server has reached a preset limit;
if the origin server has not reached the preset limit, processing the request at the origin server;
if the current processing load of the origin server has reached the preset limit, queuing the user request to wait;
when the processing load of the origin server falls below the preset limit, having the origin server preferentially process the user request that was queued first.
The method is described in detail below with reference to Fig. 2 and an example.
At S21, clients send a plurality of user requests; these may be HTTP requests from a plurality of different clients.
At S22, the edge node server receives the requests.
At S23, it is judged whether a received client request needs to go back to the source, for example by judging whether the resource the request points to must be fetched from the origin server.
If the client request does not need to go back to the source, for example because the edge node server holds a cached copy of the requested resource, the flow goes to S24, where the edge node server processes and responds to the request, for example by returning the locally cached file directly to the client.
If the client request needs to go back to the source, the flow advances to S25, where it is judged whether the current processing load of the origin server has reached a preset limit. The limit can be set in advance according to the actual processing capacity of the origin server, for example the number of client requests the origin server can handle concurrently.
If the current processing load of the origin server has not yet reached the limit, the flow goes to S28, where the origin server processes the client request, for example by looking up the resource the request points to, and then advances to S24, where the edge node server responds to the client.
If the current processing load of the origin server has reached the limit, the flow advances to S26, where the client requests are queued to wait in chronological order; for example, user requests can be added to a waiting queue in order of arrival, with earlier requests nearer the head of the queue and later requests nearer the tail.
The flow then advances to S27, where it is judged whether the origin server can accept a request, that is, whether its current processing load is below the limit. If so, the client request at the head of the waiting queue is taken out and the flow advances to S28, where the origin server processes the request and returns the result to the edge node server, which in turn returns the response to the client that sent the request. If the processing load is not yet below the limit, the flow returns to S26 and the request keeps waiting.
In addition, when the processing load of the origin server is below the limit and the waiting queue is currently empty, a new user request can be processed directly without queuing.
It should be noted that in the above example the user requests are queued to wait in chronological order, but the invention is not limited to this; for example, the user requests could also be queued according to the priority of the client requests, with higher-priority requests placed nearer the head of the queue so that they are processed first.
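The priority-based ordering mentioned here can be sketched with a heap that orders waiting requests by priority first and by arrival order second. The function names and the numeric convention (smaller value means higher priority) are illustrative assumptions, not part of the patent.

```python
import heapq
import itertools

arrival = itertools.count()   # tie-breaker preserving arrival order within a priority
wait_heap = []                # entries: (priority, arrival_no, request)

def enqueue(request, priority=10):
    """Queue a back-to-source request; lower priority numbers are served first."""
    heapq.heappush(wait_heap, (priority, next(arrival), request))

def dequeue():
    """Take out the waiting request that should be handed to the origin next."""
    if wait_heap:
        return heapq.heappop(wait_heap)[2]
    return None
```

With equal priorities this degenerates to the chronological queue of the main example; a request enqueued with a smaller priority value moves ahead of earlier arrivals.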
Fig. 3 shows a structural block diagram of the device for orderly returning to the source in a content distribution network of the present embodiment, comprising: a user interface unit 31, a judging unit 32, a sorting unit 33, a first processing unit 34, a second processing unit 35, a third processing unit 36 and a fourth processing unit 37.
The user interface unit 31 receives user requests from clients. The judging unit 32 judges whether a client request needs to go back to the source and, when it does, whether the current processing load of the origin server has reached a preset limit. If the user request does not need to go back to the source, the second processing unit 35 transfers it to the edge node server for processing and the response result is returned to the client.
If the current processing load of the origin server has not reached the preset limit, the third processing unit 36 accesses the origin server to obtain the required data, for example the resource data the client request points to. If the current processing load of the origin server has reached the preset limit, the sorting unit 33 queues the user requests to wait, for example in chronological order. When the processing load of the origin server falls below the preset limit, the first processing unit 34 preferentially hands the user request that was queued first to the origin server for processing; the origin server transfers the result to the edge node server, which in turn returns it to the client. If the processing load of the origin server is below the preset limit and no user request is queued (for example, the waiting queue is empty), the fourth processing unit 37 processes a new user request directly without queuing.
For further details of the device, please refer to the detailed description of the method in the previous embodiment.
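As a structural illustration only, the units of Fig. 3 can be mapped onto methods of a single class; the class name, the request representation, and the limit value are assumptions of this sketch, not part of the patent.

```python
from collections import deque

class OrderlyBackToSource:
    """Sketch of the Fig. 3 device: each method plays the role of one unit."""

    def __init__(self, limit):
        self.limit = limit    # preset limit on the origin's processing load
        self.load = 0         # current origin processing load
        self.queue = deque()  # sorting unit's waiting queue (arrival order)

    def receive(self, request):
        """User interface unit: take in a request and route it."""
        if request.get("cached", False):
            return "edge"         # second processing unit: edge node responds
        if self.load >= self.limit:
            self.queue.append(request)
            return "queued"       # sorting unit: queue the request to wait
        self.load += 1
        return "origin"           # third processing unit: access the origin now

    def origin_finished(self):
        """First/fourth processing units: on completion, serve the queue head."""
        self.load -= 1
        if self.queue and self.load < self.limit:
            self.load += 1
            self.queue.popleft()
            return "origin"       # earliest-queued request handed to the origin
        return None
```

The judging unit corresponds to the two `if` tests in `receive`; when the queue is empty and the load is below the limit, a new request goes straight to the origin, matching the fourth processing unit's behavior.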
Although the present invention is disclosed above with preferred embodiments, they are not intended to limit the invention. Any person skilled in the art may make possible changes and modifications without departing from the spirit and scope of the invention; therefore, the protection scope of the invention shall be defined by the claims.
Claims (10)
1. A method for orderly returning to the source in a content distribution network, characterized by comprising:
receiving a user request;
if the user request needs to go back to the source, judging whether the current processing load of the origin server has reached a preset limit;
if the current processing load of the origin server has reached the preset limit, queuing the user request to wait;
when the processing load of the origin server falls below the preset limit, having the origin server preferentially process the user request that was queued first.
2. The method according to claim 1, characterized by further comprising: if the user request does not need to go back to the source, processing the user request with an edge node server and returning the response result.
3. The method according to claim 1, characterized by further comprising: if the current processing load of the origin server has not reached the preset limit, accessing the origin server to obtain the required data.
4. The method according to claim 1, characterized in that queuing the user request to wait comprises: adding the user request to a waiting queue in chronological order.
5. The method according to claim 1, characterized by further comprising: if the processing load of the origin server is below the preset limit and no user request is currently queued, processing a new user request directly without queuing.
6. A device for orderly returning to the source in a content distribution network, characterized by comprising:
a user interface unit, which receives a user request;
a judging unit, which, if the user request needs to go back to the source, judges whether the current processing load of the origin server has reached a preset limit;
a sorting unit, which, if the current processing load of the origin server has reached the preset limit, queues the user request to wait;
a first processing unit, which, when the processing load of the origin server falls below the preset limit, preferentially transfers the user request that was queued first to the origin server for processing.
7. The orderly back-to-source device according to claim 6, characterized by further comprising:
a second processing unit, which, if the user request does not need to go back to the source, transfers the user request to the edge node server for processing and returns the response result.
8. The orderly back-to-source device according to claim 6, characterized by further comprising:
a third processing unit, which, if the current processing load of the origin server has not reached the preset limit, accesses the origin server to obtain the required data.
9. The orderly back-to-source device according to claim 6, characterized in that the sorting unit adds the user request to a waiting queue in chronological order.
10. The orderly back-to-source device according to claim 6, characterized by further comprising:
a fourth processing unit, which, if the processing load of the origin server is below the preset limit and no user request is currently queued, processes a new user request directly without queuing.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310149174.XA CN103237031B (en) | 2013-04-26 | 2013-04-26 | Method and device for orderly backing to source in content distribution network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310149174.XA CN103237031B (en) | 2013-04-26 | 2013-04-26 | Method and device for orderly backing to source in content distribution network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103237031A true CN103237031A (en) | 2013-08-07 |
CN103237031B CN103237031B (en) | 2016-04-20 |
Family
ID=48885048
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310149174.XA Active CN103237031B (en) | 2013-04-26 | 2013-04-26 | Method and device for orderly backing to source in content distribution network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103237031B (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105025105A (en) * | 2015-07-27 | 2015-11-04 | Guangzhou Huaduo Network Technology Co., Ltd. | Request processing method and device |
CN105246052A (en) * | 2015-10-14 | 2016-01-13 | China United Network Communications Group Co., Ltd. | Data distribution method and device |
CN106572166A (en) * | 2016-11-02 | 2017-04-19 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | Data transmission method, backup server and mobile terminal |
CN109005118A (en) * | 2018-08-21 | 2018-12-14 | China Ping An Life Insurance Co., Ltd. | Method, device, computer equipment and storage medium for searching a CDN origin address |
CN110392074A (en) * | 2018-04-19 | 2019-10-29 | Guizhou Baishancloud Technology Co., Ltd. | Scheduling method and device based on dynamic acceleration |
CN110636104A (en) * | 2019-08-07 | 2019-12-31 | MIGU Video Technology Co., Ltd. | Resource request method, electronic device and storage medium |
CN110858844A (en) * | 2018-08-22 | 2020-03-03 | Alibaba Group Holding Ltd. | Service request processing method, control method, device, system and electronic equipment |
CN110933467A (en) * | 2019-12-02 | 2020-03-27 | Tencent Technology (Shenzhen) Co., Ltd. | Live broadcast data processing method and device and computer readable storage medium |
CN115250294A (en) * | 2021-04-25 | 2022-10-28 | Guizhou Baishancloud Technology Co., Ltd. | Data request processing method based on cloud distribution and system, medium and equipment thereof |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101406025A (en) * | 2006-03-28 | 2009-04-08 | Thomson Licensing | Centralized scheduling device for a content delivery network |
CN102594921A (en) * | 2012-03-22 | 2012-07-18 | Wangsu Science & Technology Co., Ltd. | Synchronous file access method and system based on a content distribution system |
CN102790798A (en) * | 2012-05-23 | 2012-11-21 | ChinaCache Network Technology (Beijing) Co., Ltd. | Transparent proxy implementation method, device and system in a content distribution network |
CN102970381A (en) * | 2012-12-21 | 2013-03-13 | Wangsu Science & Technology Co., Ltd. | Multi-source load balancing method and system with proportional polling based on a content distribution network |
- 2013-04-26: CN application CN201310149174.XA filed, granted as patent CN103237031B (status: active)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101406025A (en) * | 2006-03-28 | 2009-04-08 | Thomson Licensing | Centralized scheduling device for a content delivery network |
CN102594921A (en) * | 2012-03-22 | 2012-07-18 | Wangsu Science & Technology Co., Ltd. | Synchronous file access method and system based on a content distribution system |
CN102790798A (en) * | 2012-05-23 | 2012-11-21 | ChinaCache Network Technology (Beijing) Co., Ltd. | Transparent proxy implementation method, device and system in a content distribution network |
CN102970381A (en) * | 2012-12-21 | 2013-03-13 | Wangsu Science & Technology Co., Ltd. | Multi-source load balancing method and system with proportional polling based on a content distribution network |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105025105B (en) * | 2015-07-27 | 2018-10-30 | Guangzhou Huaduo Network Technology Co., Ltd. | Request processing method and device |
CN105025105A (en) * | 2015-07-27 | 2015-11-04 | Guangzhou Huaduo Network Technology Co., Ltd. | Request processing method and device |
CN105246052A (en) * | 2015-10-14 | 2016-01-13 | China United Network Communications Group Co., Ltd. | Data distribution method and device |
CN105246052B (en) * | 2015-10-14 | 2018-08-03 | China United Network Communications Group Co., Ltd. | Data distribution method and device |
CN106572166B (en) * | 2016-11-02 | 2019-07-05 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | Data transmission method, backup server and mobile terminal |
CN106572166A (en) * | 2016-11-02 | 2017-04-19 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | Data transmission method, backup server and mobile terminal |
CN110392074A (en) * | 2018-04-19 | 2019-10-29 | Guizhou Baishancloud Technology Co., Ltd. | Scheduling method and device based on dynamic acceleration |
CN110392074B (en) * | 2018-04-19 | 2022-05-17 | Guizhou Baishancloud Technology Co., Ltd. | Scheduling method and device based on dynamic acceleration |
CN109005118A (en) * | 2018-08-21 | 2018-12-14 | China Ping An Life Insurance Co., Ltd. | Method, device, computer equipment and storage medium for searching a CDN origin address |
CN110858844A (en) * | 2018-08-22 | 2020-03-03 | Alibaba Group Holding Ltd. | Service request processing method, control method, device, system and electronic equipment |
CN110636104A (en) * | 2019-08-07 | 2019-12-31 | MIGU Video Technology Co., Ltd. | Resource request method, electronic device and storage medium |
CN110933467A (en) * | 2019-12-02 | 2020-03-27 | Tencent Technology (Shenzhen) Co., Ltd. | Live broadcast data processing method and device and computer readable storage medium |
CN115250294A (en) * | 2021-04-25 | 2022-10-28 | Guizhou Baishancloud Technology Co., Ltd. | Data request processing method based on cloud distribution and system, medium and equipment thereof |
CN115250294B (en) * | 2021-04-25 | 2024-03-22 | Guizhou Baishancloud Technology Co., Ltd. | Cloud distribution-based data request processing method and system, medium and equipment thereof |
Also Published As
Publication number | Publication date |
---|---|
CN103237031B (en) | 2016-04-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103237031A (en) | Method and device for orderly backing to source in content distribution network | |
CN103716251B (en) | Load balancing method and device for a content distribution network | |
CN106209682B (en) | Service scheduling method, device and system | |
CN107909261B (en) | Order pushing method and device | |
CN109145020A (en) | Information query method, slave server, client and computer readable storage medium | |
CN106790552B (en) | Content providing system based on a content distribution network | |
CN106657248A (en) | Docker container based network load balancing system and establishment method and operating method thereof | |
CN104618164A (en) | Management method for rapid cloud computing platform application deployment | |
CN105005611B (en) | File management system and file management method | |
CN103281367A (en) | Load balancing method and device | |
US20160035005A1 (en) | Online cart and shopping list sharing | |
WO2019128357A1 (en) | Picture requesting method, method for responding to picture request, and client | |
CN105554085B (en) | Dynamic timeout processing method and device based on server connections | |
CN104601534A (en) | Method and system for processing CDN system images | |
CN102710535A (en) | Data acquisition method and equipment | |
CN108881651A (en) | Data processing method, device and equipment of call platform and storage medium | |
CN104519088A (en) | Cache system implementation method and cache system | |
CN105897865B (en) | Protocol-independent network file service management system and method | |
CN105187514A (en) | Management method for cloud application and system thereof | |
CN107332703B (en) | Method and device for checking multi-application logs | |
CN104468710A (en) | Mixed big data processing system and method | |
CN107634854B (en) | Service data processing method and device | |
CN104852964A (en) | Multifunctional server scheduling method | |
CN105915610A (en) | Asynchronous communication method and device | |
CN107045452B (en) | Virtual machine scheduling method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant |