CN106210028A - Method, server and system for preventing server overload - Google Patents

Method, server and system for preventing server overload

Info

Publication number
CN106210028A
CN106210028A (application number CN201610526094.5A)
Authority
CN
China
Prior art keywords
concurrent request
server
storage server
throughput
request volume
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610526094.5A
Other languages
Chinese (zh)
Other versions
CN106210028B (en)
Inventor
罗少奇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Cubesili Information Technology Co Ltd
Original Assignee
Guangzhou Huaduo Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Huaduo Network Technology Co Ltd
Priority to CN201610526094.5A
Publication of CN106210028A
Application granted
Publication of CN106210028B
Current legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/01: Protocols
    • H04L67/10: Protocols in which an application is distributed across nodes in the network
    • H04L67/1097: Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/01: Protocols
    • H04L67/10: Protocols in which an application is distributed across nodes in the network
    • H04L67/1001: Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer And Data Communications (AREA)

Abstract

The embodiment of the invention discloses a method, server and system for preventing server overload, which solve the technical problem that, under high load, the storage servers of current distributed storage systems crash because of an excessive volume of concurrent requests. The method for preventing server overload comprises: continuously sending concurrent requests to a storage server according to an initial concurrent request window; calculating the throughput of the storage server from the number of packets returned by the storage server, and determining an actual concurrent request window according to the throughput; and using the size of the actual concurrent request window as a concurrent request threshold, so that once the concurrent requests sent reach the concurrent request threshold, no further concurrent requests are sent to the storage server.

Description

Method, server and system for preventing server overload
Technical field
The present invention relates to the technical field of data processing, and in particular to a method, server and system for preventing server overload.
Background art
A distributed storage system stores data dispersed across many independent devices. A traditional networked storage system uses a centralized storage server to hold all data, so the storage server becomes the bottleneck of system performance and the focal point of reliability and security, and cannot meet the needs of mass storage applications. A distributed network storage system uses a scalable architecture: multiple storage servers share the storage load and a location server locates the stored information. This not only improves the reliability, availability and access efficiency of the system, but also makes it easy to scale.
Current distributed storage systems use a two-layer structure: the first layer is a proxy server, and the second layer is a message synchronization cluster and a storage service cluster. Under high load, an excessive volume of concurrent requests causes the storage servers to crash.
Summary of the invention
The embodiments of the present invention provide a method, server and system for preventing server overload, which solve the technical problem that, under high load, the storage servers of current distributed storage systems crash because of an excessive volume of concurrent requests.
The method for preventing server overload provided by an embodiment of the present invention comprises:
continuously sending concurrent requests to a storage server according to an initial concurrent request window;
calculating the throughput of the storage server according to the number of returned packets returned by the storage server, and determining an actual concurrent request window according to the throughput;
using the size of the actual concurrent request window as a concurrent request threshold, and stopping sending concurrent requests to the storage server once the concurrent requests sent reach the concurrent request threshold.
Optionally, before continuously sending concurrent requests to the storage server according to the size of the initial concurrent request window, the method further comprises:
setting the size of the initial concurrent request window to infinity.
Optionally, calculating the throughput of the storage server according to the number of returned packets returned by the storage server specifically comprises:
calculating the throughput of the storage server from the number of packets returned within a preset time period for the concurrent requests that have been sent.
Optionally, determining the actual concurrent request window according to the throughput specifically comprises:
when the number of requests of the concurrent requests reaches a preset request-count threshold, determining that the storage server is under high load;
setting the currently calculated throughput as the current actual concurrent request window.
Optionally, setting the currently calculated throughput as the current actual concurrent request window specifically comprises:
continuously calculating the throughput of the storage server in real time while concurrent requests are being sent to the storage server;
judging whether the throughput calculated in real time is greater than the actual concurrent request window, and if so, updating the actual concurrent request window to the throughput calculated in real time.
A server provided by an embodiment of the present invention comprises:
a concurrent request sending unit, configured to continuously send concurrent requests to a storage server according to an initial concurrent request window;
a calculating unit, configured to calculate the throughput of the storage server according to the number of returned packets returned by the storage server, and to determine an actual concurrent request window according to the throughput;
a system load capacity determining unit, configured to use the size of the actual concurrent request window as a concurrent request threshold, and to stop sending concurrent requests to the storage server once the concurrent requests sent reach the concurrent request threshold.
Optionally, the server further comprises:
a setting unit, configured to set the size of the initial concurrent request window to infinity.
Optionally, the calculating unit specifically comprises:
a calculating subunit, configured to calculate the throughput of the storage server from the number of packets returned within a preset time period for the concurrent requests that have been sent.
Optionally, the calculating unit further comprises:
a high-load determining subunit, configured to determine that the storage server is under high load when the number of requests of the concurrent requests reaches a preset request-count threshold;
an actual concurrent request determining subunit, configured to set the currently calculated throughput as the current actual concurrent request window.
Optionally, the actual concurrent request determining subunit specifically comprises:
a real-time calculating module, configured to continuously calculate the throughput of the storage server in real time while concurrent requests are being sent to the storage server;
a judging module, configured to judge whether the throughput calculated in real time is greater than the actual concurrent request window, and if so, to update the actual concurrent request window to the throughput calculated in real time.
A system for preventing server overload provided by an embodiment of the present invention comprises:
several storage servers, and any one of the servers described in this embodiment;
the several storage servers establishing communication connections with the server.
As can be seen from the above technical solutions, the embodiments of the present invention have the following advantages:
The embodiments of the present invention provide a method, server and system for preventing server overload. In the method, concurrent requests are continuously sent to a storage server according to an initial concurrent request window; the throughput of the storage server is calculated according to the number of returned packets, and an actual concurrent request window is determined according to the throughput; the size of the actual concurrent request window is then used as a concurrent request threshold, and once the concurrent requests sent reach this threshold, no further concurrent requests are sent to the storage server. In this way the load capacity of the storage server is assessed first, and the volume of concurrent requests sent is then controlled according to that capacity, which solves the technical problem that, under high load, the storage servers of current distributed storage systems crash because of an excessive volume of concurrent requests.
Brief description of the drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention or in the prior art, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those of ordinary skill in the art may obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of one embodiment of the method for preventing server overload provided by an embodiment of the present invention;
Fig. 2 is a schematic flowchart of another embodiment of the method for preventing server overload provided by an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of one embodiment of a server provided by an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of another embodiment of a server provided by an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of one embodiment of the system for preventing server overload provided by an embodiment of the present invention;
Fig. 6 is a schematic architecture diagram of the distributed storage system.
Detailed description of the embodiments
The embodiments of the present invention provide a method, server and system for preventing server overload, which solve the technical problem that, under high load, the storage servers of current distributed storage systems crash because of an excessive volume of concurrent requests.
To make the objectives, features and advantages of the present invention more obvious and comprehensible, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the embodiments described below are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Referring to Fig. 1, one embodiment of the method for preventing server overload provided by an embodiment of the present invention comprises:
101. continuously sending concurrent requests to a storage server according to an initial concurrent request window;
In this embodiment, when the processing capability of the whole distributed storage system needs to be assessed in order to prevent the system from crashing, concurrent requests are first continuously sent to the storage server according to the initial concurrent request window.
102. calculating the throughput of the storage server according to the number of returned packets returned by the storage server, and determining an actual concurrent request window according to the throughput;
After concurrent requests have been continuously sent to the storage server according to the initial concurrent request window, the throughput of the storage server is calculated according to the number of returned packets, and the actual concurrent request window is determined according to the throughput.
103. using the size of the actual concurrent request window as a concurrent request threshold, and stopping sending concurrent requests to the storage server once the concurrent requests sent reach the concurrent request threshold.
After the throughput of the storage server has been calculated from the number of returned packets and the actual concurrent request window has been determined according to the throughput, the size of the actual concurrent request window is used as the concurrent request threshold; once the concurrent requests sent reach the concurrent request threshold, no further concurrent requests are sent to the storage server.
In this embodiment, concurrent requests are continuously sent to the storage server according to the initial concurrent request window; the throughput of the storage server is then calculated from the number of returned packets and the actual concurrent request window is determined according to the throughput; finally, the size of the actual concurrent request window is used as the concurrent request threshold, and once the concurrent requests sent reach this threshold, no further concurrent requests are sent to the storage server. The load capacity of the storage server is thus assessed first, and the volume of concurrent requests sent is then controlled according to that capacity, which solves the technical problem that, under high load, the storage servers of current distributed storage systems crash because of an excessive volume of concurrent requests.
The above describes the overall process of the method for preventing server overload; the process is described in detail below. Referring to Fig. 2, another embodiment of the method for preventing server overload provided by an embodiment of the present invention comprises:
201. setting the size of the initial concurrent request window to infinity;
In this embodiment, when the processing capability of the whole distributed storage system needs to be assessed in order to prevent the system from crashing, the size of the initial concurrent request window is first set to infinity.
202. continuously sending concurrent requests to the storage server according to the initial concurrent request window;
After the size of the initial concurrent request window has been set to infinity, concurrent requests are continuously sent to the storage server according to the initial concurrent request window.
It should be noted that the concurrent request window represents the number of requests that can be sent to the external service at the same time.
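For illustration only (this sketch is not part of the original disclosure), such a window can be enforced by limiting the number of in-flight requests, for example with a buffered channel in Go; an unrestricted initial window, as in step 201, would simply skip this limiter. The type and method names below are hypothetical.

```go
package window

// ConcurrencyWindow illustrates a concurrent request window: it bounds how
// many requests may be outstanding (sent but not yet returned) at one time.
type ConcurrencyWindow struct {
	slots chan struct{}
}

// NewConcurrencyWindow creates a window that allows at most size in-flight requests.
func NewConcurrencyWindow(size int) *ConcurrencyWindow {
	return &ConcurrencyWindow{slots: make(chan struct{}, size)}
}

// Acquire blocks until a slot in the window is free, i.e. until fewer than
// size requests are in flight.
func (w *ConcurrencyWindow) Acquire() { w.slots <- struct{}{} }

// Release frees a slot once the corresponding request has returned or timed out.
func (w *ConcurrencyWindow) Release() { <-w.slots }
```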
203. calculating the throughput of the storage server from the number of packets returned within a preset time period for the concurrent requests that have been sent;
After concurrent requests have been continuously sent to the storage server according to the initial concurrent request window, the throughput of the storage server is calculated from the number of packets returned within a preset time period for the concurrent requests that have been sent.
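A minimal sketch of this calculation, assuming a fixed measurement period T and a counter of packets returned during that period (the names Meter, OnPacketReturned and Throughput are illustrative and not taken from the patent):

```go
package throughput

import (
	"sync/atomic"
	"time"
)

// Meter counts the packets returned by the storage server and converts the
// count into a throughput (requests handled per second) once per period.
type Meter struct {
	period   time.Duration // the preset time period T, e.g. 10 * time.Second
	returned atomic.Int64  // packets returned during the current period
}

// NewMeter creates a meter for the given measurement period.
func NewMeter(period time.Duration) *Meter {
	return &Meter{period: period}
}

// OnPacketReturned is called once for every packet returned by the storage server.
func (m *Meter) OnPacketReturned() { m.returned.Add(1) }

// Throughput resets the counter and reports R / T for the period that just elapsed.
func (m *Meter) Throughput() float64 {
	r := m.returned.Swap(0)
	return float64(r) / m.period.Seconds()
}
```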
204. when the number of requests of the concurrent requests reaches a preset request-count threshold, determining that the storage server is under high load;
After the throughput of the storage server has been calculated from the number of packets returned within the preset time period for the concurrent requests that have been sent, if the number of requests of the concurrent requests reaches the preset request-count threshold, the storage server is determined to be under high load.
205. continuously calculating the throughput of the storage server in real time while concurrent requests are being sent to the storage server;
After the storage server has been determined to be under high load because the number of requests of the concurrent requests has reached the preset request-count threshold, the currently calculated throughput needs to be set as the current actual concurrent request window. Specifically, to set the currently calculated throughput as the current actual concurrent request window, the throughput of the storage server is continuously calculated in real time while concurrent requests are being sent to the storage server.
206. judging whether the throughput calculated in real time is greater than the actual concurrent request window, and if so, performing step 207;
After the throughput of the storage server has been calculated in real time while concurrent requests are being sent to the storage server, it is judged whether the throughput calculated in real time is greater than the actual concurrent request window; if so, step 207 is performed.
207. updating the actual concurrent request window to the throughput calculated in real time;
When the throughput calculated in real time is judged to be greater than the actual concurrent request window, the actual concurrent request window is updated to the throughput calculated in real time.
208. using the size of the actual concurrent request window as the concurrent request threshold, and stopping sending concurrent requests to the storage server once the concurrent requests sent reach the concurrent request threshold.
After the throughput of the storage server has been calculated from the number of returned packets and the actual concurrent request window has been determined according to the throughput, the size of the actual concurrent request window is used as the concurrent request threshold; once the concurrent requests sent reach the concurrent request threshold, no further concurrent requests are sent to the storage server.
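Steps 201 to 207 amount to a small piece of window-update logic. The sketch below is illustrative only: it assumes, as in the concrete scenario described next, that the count compared against the preset threshold refers to requests that have timed out, and all identifiers are hypothetical.

```go
package overload

import "math"

// WindowState holds the quantities manipulated in steps 201 to 207.
type WindowState struct {
	Window   float64 // actual concurrent request window; starts at +Inf (step 201)
	HighLoad bool    // true once the storage server has been judged to be under high load (step 204)
}

// NewWindowState initialises the window to infinity, i.e. unrestricted.
func NewWindowState() *WindowState {
	return &WindowState{Window: math.Inf(1)}
}

// Update applies steps 204 to 207 for one measurement period: once the counted
// requests reach the preset threshold, the server is treated as highly loaded
// and the window is set to the current throughput (steps 204-205); afterwards
// the window is raised whenever a larger real-time throughput is observed
// (steps 206-207).
func (s *WindowState) Update(throughput float64, requestCount, threshold int) {
	if !s.HighLoad && requestCount >= threshold {
		s.HighLoad = true
		s.Window = throughput
		return
	}
	if s.HighLoad && throughput > s.Window {
		s.Window = throughput
	}
}
```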
The overload protection of the server is described in detail below with a concrete application scenario. As shown in Fig. 6, the application example is as follows.
The architecture of Fig. 6 is that of a distributed storage system called horsetable (a packet-type architecture) with a two-layer structure. The first-layer service is horse_proxy (the proxy server); the second-layer services are sync_broker (the message synchronization cluster) and storage (the storage service cluster).
In this distributed storage system, the processing performance of the horse_proxy process is substantially better than that of sync_broker and storage, so the performance bottleneck of the system lies in sync_broker and storage. The horse_proxy service controls the concurrent request window by calculating the throughput of the external services (sync_broker and storage) when they are under high load, thereby preventing an excessive volume of concurrent requests from crushing the external services and ensuring the normal service capability of the whole system. Here, throughput refers to the number of requests that can be processed per unit of time, and the concurrent request volume refers to the number of requests sent per unit of time.
The following steps take horse_proxy and the storage service as an example:
(1) The concurrent request window W represents the number of requests that can be sent to the external service at the same time, and is used to assess how many requests the external service can process simultaneously. W is initialized to infinity, i.e. it is not restricted; the horse_proxy service continuously sends requests to the storage service with window size W.
(2) The horse_proxy service calculates the throughput of the storage service from the number of returned packets R within a time period T (for example 10 seconds): throughput = R / T.
(3) When the number of timed-out requests reaches a certain threshold V, the horse_proxy service considers the storage service to be under high load, and takes the throughput at that moment as the concurrent request window W.
(4) Whenever the throughput exceeds the concurrent request window W, the horse_proxy service updates W to that throughput.
(5) As long as the concurrent request window W is not used up, the horse_proxy service can keep sending requests to the storage service; once it is used up, sending stops.
In (3), if V is too small, the external service may be mistakenly considered to be under high load; if V is too large, the storage service being under high load may not be discovered in time. The value of V therefore has to be balanced according to the actual situation.
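Steps (1) to (5) can be read as a simple control loop. The following sketch is a schematic illustration only, not the actual horse_proxy implementation: measurePeriod is a hypothetical stand-in for one period of sending requests to the storage service, and T and V are example values.

```go
package main

import (
	"fmt"
	"math"
	"time"
)

// measurePeriod stands in for one period of sending requests at window w: it
// would keep sending requests to the storage service (pausing whenever w
// requests are already in flight, as in step (5)) and report how many packets
// returned and how many requests timed out. The real sending logic is omitted.
func measurePeriod(w float64, t time.Duration) (returned, timeouts int) {
	return 0, 0
}

func main() {
	const T = 10 * time.Second // measurement period from step (2)
	const V = 50               // timeout threshold from step (3), chosen per deployment

	w := math.Inf(1) // step (1): W starts unrestricted
	for {
		returned, timeouts := measurePeriod(w, T)
		throughput := float64(returned) / T.Seconds() // step (2): throughput = R / T

		if timeouts >= V && math.IsInf(w, 1) {
			w = throughput // step (3): storage considered highly loaded; take throughput as W
		}
		if throughput > w {
			w = throughput // step (4): raise W whenever the measured throughput exceeds it
		}
		fmt.Printf("throughput=%.1f req/s, window W=%.1f\n", throughput, w)
	}
}
```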
Provided that the service capability of the horse_proxy service is better than that of the storage service, this ensures that the storage service is not crushed and keeps its best service capability, so that the whole system is not overloaded and does not stop working. It should be noted that a fluctuation may occur when the state changes, but such a fluctuation lasts only one or two periods of T seconds, and state changes are infrequent.
In the horsetable two-layer system, given that the processing performance of the first-layer service is better than that of the second-layer service, the first-layer service controls the volume of concurrent requests by assessing the service capability of the second-layer service, thereby ensuring the normal service capability of the second-layer service and ultimately the normal service capability of the whole system, even under overload conditions.
In this embodiment, concurrent requests are continuously sent to the storage server according to the initial concurrent request window; the throughput of the storage server is then calculated from the number of returned packets and the actual concurrent request window is determined according to the throughput; finally, the size of the actual concurrent request window is used as the concurrent request threshold, and once the concurrent requests sent reach this threshold, no further concurrent requests are sent to the storage server. The load capacity of the storage server is thus assessed first, and the volume of concurrent requests is then controlled according to that capacity, which solves the technical problem that, under high load, the storage servers of current distributed storage systems crash because of an excessive volume of concurrent requests. Moreover, because the final actual concurrent request window is determined from the continuously calculated throughput together with the current concurrent request window, the service capability of the system is assessed more accurately, ensuring the normal service capability of the whole system.
Referring to Fig. 3, one embodiment of a server provided by an embodiment of the present invention comprises:
a concurrent request sending unit 301, configured to continuously send concurrent requests to a storage server according to an initial concurrent request window;
a calculating unit 302, configured to calculate the throughput of the storage server according to the number of returned packets returned by the storage server, and to determine an actual concurrent request window according to the throughput;
a system load capacity determining unit 303, configured to use the size of the actual concurrent request window as a concurrent request threshold, and to stop sending concurrent requests to the storage server once the concurrent requests sent reach the concurrent request threshold.
In this embodiment, the concurrent request sending unit 301 continuously sends concurrent requests to the storage server according to the initial concurrent request window; the calculating unit 302 then calculates the throughput of the storage server according to the number of returned packets and determines the actual concurrent request window according to the throughput; finally, the system load capacity determining unit 303 uses the size of the actual concurrent request window as the concurrent request threshold and stops sending concurrent requests to the storage server once the concurrent requests sent reach that threshold. The load capacity of the storage server is thus assessed first, and the volume of concurrent requests is then controlled according to that capacity, which solves the technical problem that, under high load, the storage servers of current distributed storage systems crash because of an excessive volume of concurrent requests.
The above describes the units of the server; they are described in detail below at the level of their subunits. Referring to Fig. 4, another embodiment of a server provided by an embodiment of the present invention comprises:
a setting unit 401, configured to set the size of the initial concurrent request window to infinity;
a concurrent request sending unit 402, configured to continuously send concurrent requests to the storage server according to the initial concurrent request window;
a calculating unit 403, configured to calculate the throughput of the storage server according to the number of returned packets returned by the storage server, and to determine the actual concurrent request window according to the throughput;
the calculating unit 403 specifically comprises:
a calculating subunit 4031, configured to calculate the throughput of the storage server from the number of packets returned within a preset time period for the concurrent requests that have been sent;
a high-load determining subunit 4032, configured to determine that the storage server is under high load when the number of requests of the concurrent requests reaches a preset request-count threshold;
an actual concurrent request determining subunit 4033, configured to set the currently calculated throughput as the current actual concurrent request window;
the actual concurrent request determining subunit 4033 specifically comprises:
a real-time calculating module 4031a, configured to continuously calculate the throughput of the storage server in real time while concurrent requests are being sent to the storage server;
a judging module 4032b, configured to judge whether the throughput calculated in real time is greater than the actual concurrent request window, and if so, to update the actual concurrent request window to the throughput calculated in real time;
a system load capacity determining unit 404, configured to use the size of the actual concurrent request window as the concurrent request threshold, and to stop sending concurrent requests to the storage server once the concurrent requests sent reach the concurrent request threshold.
In this embodiment, the concurrent request sending unit 402 continuously sends concurrent requests to the storage server according to the initial concurrent request window; the calculating unit 403 then calculates the throughput of the storage server according to the number of returned packets and determines the actual concurrent request window according to the throughput; finally, the system load capacity determining unit 404 uses the size of the actual concurrent request window as the concurrent request threshold and stops sending concurrent requests to the storage server once the concurrent requests sent reach that threshold. The load capacity of the storage server is thus assessed first, and the volume of concurrent requests is then controlled according to that capacity, which solves the technical problem that, under high load, the storage servers of current distributed storage systems crash because of an excessive volume of concurrent requests. Moreover, because the final actual concurrent request window is determined from the continuously calculated throughput together with the current concurrent request window, the service capability of the system is assessed more accurately, ensuring the normal service capability of the whole system.
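To make the unit decomposition of Fig. 4 concrete, the following Go sketch (illustrative only, not part of the patent) maps each unit onto a hypothetical interface; all names and signatures are assumptions.

```go
package overload

// SettingUnit corresponds to the setting unit 401: it provides the initial
// concurrent request window, which this embodiment sets to infinity.
type SettingUnit interface {
	InitialWindow() float64
}

// ConcurrentRequestSender corresponds to the concurrent request sending unit 402.
type ConcurrentRequestSender interface {
	Send(window float64)
}

// CalculatingUnit corresponds to the calculating unit 403 and its subunits:
// it turns returned-packet counts into a throughput and maintains the actual
// concurrent request window.
type CalculatingUnit interface {
	Throughput(returnedPackets int, periodSeconds float64) float64 // calculating subunit 4031
	HighLoad(requestCount, threshold int) bool                     // high-load determining subunit 4032
	UpdateWindow(realTimeThroughput float64) float64               // subunit 4033 with modules 4031a and 4032b
}

// LoadCapacityUnit corresponds to the system load capacity determining unit 404:
// it decides when the requests already sent have reached the concurrent request
// threshold and sending must stop.
type LoadCapacityUnit interface {
	ShouldStop(sent int, window float64) bool
}
```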
Referring to Fig. 5, one embodiment of the system for preventing server overload provided by an embodiment of the present invention comprises:
several storage servers 51, and the server 52 described in the embodiments of Fig. 3 and Fig. 4;
the several storage servers 51 establish communication connections with the server 52.
It should be noted that the storage servers 51 may be a storage service cluster, and the system may further comprise a message synchronization cluster and several clients.
In the two-layer structure of the distributed storage system, given that the processing performance of the first-layer service is better than that of the second-layer service, the first-layer service controls the volume of concurrent requests by assessing the service capability of the second-layer service, thereby ensuring the normal service capability of the second-layer service and ultimately the normal service capability of the whole system, even under overload conditions.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the systems, devices and units described above may refer to the corresponding processes in the foregoing method embodiments, and are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed system, device and method may be implemented in other ways. For example, the device embodiments described above are merely schematic; the division of the units is only a division by logical function, and there may be other ways of division in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. Furthermore, the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, i.e. they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a portable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk or an optical disc.
The above embodiments are only intended to describe the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or replace some of the technical features with equivalents, and that such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (11)

1. the method that a server prevents overload, it is characterised in that including:
Constantly concurrent request is sent to storage server according to initial concurrent request amount window;
The handling capacity of storage server is calculated according to the bag quantity of returning that described storage server returns, and according to described handling capacity Determine actual concurrent request amount window;
Using described actual concurrent request amount window size as concurrent request threshold value, until the described concurrent request sent reaches institute State concurrent request threshold value, then stop sending described concurrent request to described storage server.
Server the most according to claim 1 prevents the method for overload, it is characterised in that according to initial concurrent request amount window Mouth size constantly also included before storage server sends concurrent request:
The size arranging described initial concurrent request amount window is infinity.
Server the most according to claim 2 prevents the method for overload, it is characterised in that return according to described storage server Time bag quantity returned calculates the handling capacity of storage server and specifically includes:
The bag quantity of returning returned by preset time period and the described concurrent request that sent calculates handling up of storage server Amount.
Server the most according to claim 3 prevents the method for overload, it is characterised in that determine reality according to described handling capacity Border concurrent request amount window specifically includes:
When the request number of times of described concurrent request reaches preset request number of times threshold values, it is determined that described storage server is high negative Carry;
The described handling capacity currently calculated is set to presently described actual concurrent request amount window.
Server the most according to claim 3 prevents the method for overload, it is characterised in that gulp down described in currently calculating The amount of telling is set to presently described actual concurrent request amount window and specifically includes:
Constantly during described storage server sends concurrent request, the described handling capacity of described storage server is carried out Calculate in real time;
Judge that the described handling capacity calculated in real time whether more than described actual concurrent request amount window, the most then updates described Actual concurrent request amount window is the described handling capacity calculated in real time.
6. a server, it is characterised in that including:
Concurrent request transmitting element, for constantly sending concurrent request to storage server according to initial concurrent request amount window;
Computing unit, calculates the handling capacity of storage server for the bag quantity of returning returned according to described storage server, and Actual concurrent request amount window is determined according to described handling capacity;
Determine system load capacity unit, be used for using described actual concurrent request amount window size as concurrent request threshold value, directly Reach described concurrent request threshold value to the described concurrent request sent, then stop sending described concurrent request to described storage service Device.
Server the most according to claim 6, it is characterised in that server also includes:
Unit is set, is infinity for arranging the size of described initial concurrent request amount window.
Server the most according to claim 7, it is characterised in that computing unit specifically includes:
Computation subunit, is calculated deposit for the bag quantity of returning returned by preset time period and the described concurrent request that sent The handling capacity of storage server.
Server the most according to claim 8, it is characterised in that computing unit the most also includes:
High capacity determines subelement, is used for when the request number of times of described concurrent request reaches preset request number of times threshold values, the most really Fixed described storage server is high capacity;
Actual concurrent request amount determines subelement, for the described handling capacity currently calculated is set to presently described reality also Send out request amount window.
Server the most according to claim 9, it is characterised in that actual concurrent request amount determines that subelement specifically includes:
Computing module in real time, for constantly during described storage server sends concurrent request, storing service to described The described handling capacity of device calculates in real time;
Judge module, for judging whether the described handling capacity calculated in real time is more than described actual concurrent request amount window, if It is that then updating described actual concurrent request amount window is the described handling capacity calculated in real time.
11. 1 kinds of servers prevent the system of overload, it is characterised in that including:
Several store server, and the server as described in any one in claim 6 to 10;
Several storage servers are set up with described server communication connection relation.
CN201610526094.5A 2016-07-05 2016-07-05 Method, server and system for preventing server overload Active CN106210028B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610526094.5A CN106210028B (en) 2016-07-05 2016-07-05 Method, server and system for preventing server overload

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610526094.5A CN106210028B (en) 2016-07-05 2016-07-05 Method, server and system for preventing server overload

Publications (2)

Publication Number Publication Date
CN106210028A true CN106210028A (en) 2016-12-07
CN106210028B CN106210028B (en) 2019-09-06

Family

ID=57465462

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610526094.5A Active CN106210028B (en) 2016-07-05 2016-07-05 A kind of server prevents method, server and the system of overload

Country Status (1)

Country Link
CN (1) CN106210028B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108255599A (en) * 2016-12-29 2018-07-06 北京京东尚科信息技术有限公司 Processing method and apparatus based on a large number of requests

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101882105A (en) * 2010-06-01 2010-11-10 华南理工大学 Method for testing response time of Web page under concurrent environment
CN102148759A (en) * 2011-04-01 2011-08-10 许旭 Method for saving export bandwidth of backbone network by cache acceleration system
CN103236956A (en) * 2013-04-18 2013-08-07 神州数码网络(北京)有限公司 Method and switch for testing throughput of communication equipment
CN105207832A (en) * 2014-06-13 2015-12-30 腾讯科技(深圳)有限公司 Server stress testing method and device
CN105701207A (en) * 2016-01-12 2016-06-22 腾讯科技(深圳)有限公司 Request quantity forecast method of resource and application recommendation method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
邵燕琳: "Research on web*** performance testing tools", China Master's Theses Full-text Database (Information Science and Technology) *

Also Published As

Publication number Publication date
CN106210028B (en) 2019-09-06

Similar Documents

Publication Publication Date Title
CN109426949B (en) Cross-chain transaction method and device
CN109636384A (en) A kind of parallelization executes the method, apparatus and system of block chain transaction
CN101582850B (en) Method and system for realizing load balance
CN103414589B (en) A kind of method and device managing resource information
CN100525378C (en) Management method, system and device to update distributed set top box
CN111061735B (en) Capacity expansion method and device based on single-chain blockchain
CN108667903B (en) Data transmission method for uplink, device and storage medium
CN110493357A (en) A kind of calculation resource disposition method, system, device and computer storage medium
CN105657067A (en) Game verification method and device, game server and verification server
JP2019504415A (en) Data storage service processing method and apparatus
CN110958132A (en) Method for monitoring network card equipment, substrate management controller and network card equipment
CN103733184B (en) There is device programming system and the operational approach thereof of data broadcast
CN106210028A (en) A kind of server prevents method, server and the system of overload
CN108418752A (en) A kind of creation method and device of aggregation group
CN109587053A (en) Network shunt method and relevant device
CN108337328A (en) A kind of data exchange system, data uploading method and data download method
CN103634322B (en) Heartbeat management method, heartbeat management device and heartbeat management system for application programs
CN104657240B (en) The Failure Control method and device of more kernel operating systems
CN105450679A (en) Method and system for performing data cloud storage
CN106506647A (en) A kind of client has the intelligence community cloud storage system of data backup device
CN106254440A (en) The upgrade method of a kind of AP and device
CN108306926B (en) Method and device for pushing gateway service data of Internet of vehicles equipment
CN105519055A (en) Dynamic equilibrium method and apparatus for QoS of I/O channel
CN107846429A (en) A kind of file backup method, device and system
CN107046503A (en) A kind of message transmitting method, system and its apparatus

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210111

Address after: 510000 3108, 79 Wanbo 2nd Road, Nancun Town, Panyu District, Guangzhou City, Guangdong Province

Patentee after: GUANGZHOU CUBESILI INFORMATION TECHNOLOGY Co.,Ltd.

Address before: 511449 Wanda Plaza, Nancun Town, Panyu District, Guangzhou City, Guangdong Province

Patentee before: GUANGZHOU HUADUO NETWORK TECHNOLOGY Co.,Ltd.

TR01 Transfer of patent right
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20161207

Assignee: GUANGZHOU HUADUO NETWORK TECHNOLOGY Co.,Ltd.

Assignor: GUANGZHOU CUBESILI INFORMATION TECHNOLOGY Co.,Ltd.

Contract record no.: X2021440000053

Denomination of invention: Method, server and system for preventing overload of server

Granted publication date: 20190906

License type: Common License

Record date: 20210208

EE01 Entry into force of recordation of patent licensing contract