CN104363300A - Compute task distributed dispatching device in server cluster - Google Patents

Compute task distributed dispatching device in server cluster

Info

Publication number
CN104363300A
Authority
CN
China
Prior art keywords
processing server
processing
server
module
calculation task
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410690581.6A
Other languages
Chinese (zh)
Other versions
CN104363300B (en)
Inventor
成浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Uniview Technologies Co Ltd
Original Assignee
Zhejiang Uniview Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Uniview Technologies Co Ltd filed Critical Zhejiang Uniview Technologies Co Ltd
Priority to CN201410690581.6A priority Critical patent/CN104363300B/en
Publication of CN104363300A publication Critical patent/CN104363300A/en
Application granted granted Critical
Publication of CN104363300B publication Critical patent/CN104363300B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 - Protocols
    • H04L 67/10 - Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001 - Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L 67/1004 - Server selection for load balancing
    • H04L 67/1008 - Server selection for load balancing based on parameters of servers, e.g. available memory or workload
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 65/00 - Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L 65/1066 - Session management
    • H04L 65/1073 - Registration or de-registration
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 - Protocols
    • H04L 67/10 - Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001 - Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Multimedia (AREA)
  • Computer And Data Communications (AREA)

Abstract

The invention discloses a distributed scheduling apparatus for computing tasks in a server cluster, in which computing tasks from front-end devices are dispatched to the processing servers in the server cluster for computation. The apparatus comprises a registration management module, a distribution module and a coordination module. The registration management module accepts the registration of the processing servers and receives the computing resource information and load information they report. The distribution module allocates a bound processing server to each front-end device and issues the bound processing server to the front-end device. The coordination module receives the assistance requests sent by registered processing servers when they are overloaded and, according to the processing capacity and load information of the other processing servers, coordinates processing servers with remaining processing capacity to help complete the computing tasks contained in the assistance requests. The apparatus reduces the working pressure of the management server, automatically adjusts overloaded processing servers, and ensures the load balance of the server cluster.

Description

Distributed scheduling apparatus for computing tasks in a server cluster
Technical field
The invention belongs to the field of data service technology, and in particular relates to a distributed scheduling apparatus for computing tasks in a server cluster.
Background art
In city-level intelligent traffic management systems, performing analysis on the massive volume of vehicle-passing pictures produced by front-end devices such as high-definition checkpoints and electronic police cameras has become an increasingly urgent requirement. Intelligent analysis of vehicle-passing pictures extracts structured information such as the license plate, vehicle type, vehicle logo and body color of each passing vehicle. On the one hand, this makes it convenient to set up coordinated surveillance of suspect vehicles and capture them in real time; on the other hand, the monitoring center can perform in-depth analysis on the vehicle information produced by picture analysis to support transportation big-data applications such as fake-plate analysis, vehicle trajectory tracing and vehicle correlation analysis.
In a large-scale intelligent traffic management system, the total number of vehicle-passing pictures produced per second reaches thousands or even tens of thousands, so a single device node cannot complete the massive computing tasks of checkpoint pictures, and more computing devices must be deployed to process the pictures. When a large number of computing devices are deployed, failing to assign the massive pictures to the devices reasonably wastes device resources on the one hand, and on the other hand leaves a large number of picture-processing computing tasks unfinished in time, which affects the response speed with which the surveillance system handles public-security and traffic-accident events.
In the prior art, a relatively simple allocation strategy is as follows: the administrator of the intelligent traffic management system, through the system's management configuration interface and following a load-balancing strategy, binds a roughly equal number of checkpoints to each computing device. For example, in an intelligent traffic management system composed of 1000 checkpoints and 50 processing servers, the binding is configured so that each processing server processes the pictures of 20 checkpoints: checkpoints 1 to 20 are configured to processing server 1, checkpoints 21 to 40 to processing server 2, and so on. The configuration is issued to the checkpoints; after a checkpoint generates a vehicle-passing picture and stores it to the central storage device, it sends a picture-processing request message to the corresponding processing server, carrying the URL path of the picture. Upon receiving the request message, the processing server first reads the picture file data from the central storage device based on the picture index information and then runs license-plate recognition and other algorithms on the picture file.
However, checkpoint traffic varies markedly by region and time of day: checkpoints in different parts of a city see very different volumes of passing vehicles, and daytime rush-hour traffic differs greatly from late-night traffic. Even the traffic at a single checkpoint is affected by many factors and can change considerably; bad weather, road maintenance, shifts in a city's hot spots, expansion of the urban fringe and the like can all cause large changes in the traffic passing the checkpoints of certain roads. Because of these factors, in practice the number of vehicle-passing pictures a checkpoint generates changes dynamically, the picture-processing workload changes dynamically as well, and it cannot be estimated accurately at configuration time.
Therefore, this one-off static task-scheduling scheme that binds checkpoints to picture-processing devices is inflexible: it cannot dynamically adjust the computing devices used for picture processing according to the actual checkpoint traffic, which is likely to leave some computing devices overloaded while others are idle, so the computing resources of the whole cluster cannot be fully utilized.
Another prior-art solution does not bind checkpoints to processing servers; instead, the processing server for every checkpoint picture is assigned dynamically by the management node of the cluster. All processing servers in the cluster are managed uniformly by the cluster management node, which periodically collects from every processing server its latest computing resource status (such as CPU usage and free memory) and device state (working normally or faulty). After a checkpoint produces a vehicle-passing picture and writes it to the storage device, it sends a picture-processing request message to the management node, carrying the URL path of the picture file and asking for the picture to be processed. Based on the collected resource and state information of the computing devices and the principle of load balancing, the management node selects the processing server that currently has the most spare computing resources and then forwards the picture-processing request message to it. Upon receiving the request message, the processing server first reads the picture file from the storage device based on the picture URL and then runs license-plate recognition and other algorithms on the picture file.
The shortcomings of this solution are also obvious. Every picture from every checkpoint must converge at the management node for processing-server assignment before it can be processed. A large-scale intelligent traffic system deploys thousands or even tens of thousands of high-definition checkpoints and electronic police devices, and during peak hours the total number of vehicle-passing pictures per second reaches tens of thousands. Since every processing request must converge at the management node for assignment, the management node easily becomes the processing bottleneck of the whole system. Meanwhile, city high-definition checkpoint systems keep expanding and the number of urban vehicles keeps rising, so the performance bottleneck of the management node limits the scalability of this solution. If the problem is addressed by adding management nodes, for example expanding from 1 to n, then synchronizing cluster information among the management nodes and coordinating their scheduling strategies become very thorny technical difficulties.
Therefore, how to distribute checkpoint pictures reasonably to the devices of the cluster and make full use of the computing resources of each device is one of the key problems an intelligent traffic management system must solve.
Summary of the invention
The invention provides a distributed scheduling apparatus for computing tasks in a server processing cluster that combines static binding with dynamic adjustment. It prevents the management server from becoming the processing bottleneck of the whole system while also avoiding the problem of static binding, in which the processing tasks of the processing servers are unbalanced and the computing resources of the cluster cannot be fully utilized.
To achieve these goals, the technical solution of the present invention is as follows:
A distributed scheduling apparatus for computing tasks in a server cluster, applied to the management server in the server cluster, wherein computing tasks coming from front-end devices are dispatched to the processing servers in the server cluster for computation. The scheduling apparatus comprises a registration management module, a distribution module and a coordination module, wherein:
the registration management module is configured to accept the registration of processing servers and to receive the computing resource information and load information reported by the processing servers;
the distribution module is configured to estimate the processing capacity of each registered processing server from its computing resource information, estimate the number of computing tasks each front-end device produces, allocate a bound processing server to each front-end device, and issue the bound processing server to the front-end device;
the coordination module is configured to receive the assistance request sent by a registered processing server when it is overloaded and, according to the processing capacity and load information of the other processing servers, coordinate a processing server that has remaining processing capacity to complete the computing tasks contained in the assistance request.
Further, the coordination module is also configured to return a coordination-success or coordination-failure response message to the processing server that sent the assistance request.
In the present invention, the load information comprises the processing capacity, the total maximum input rate and the maximum input rate of each front-end device. The registration management module is also configured to receive the load information reported by a processing server and, combined with the number of assistance requests sent by that processing server and the information of its bound front-end devices, form a load and processing-capacity table.
Further, the distribution module is also configured to search the load and processing-capacity table generated by the registration management module for the processing server with the largest remaining processing capacity and to bind a newly added front-end device to that processing server. The distribution module is also configured to determine, from the load and processing-capacity table and the number of assistance requests sent by each processing server, which overloaded processing servers need adjustment, and to rebind the front-end devices that exceed an overloaded processing server's processing capacity to other processing servers that have remaining processing capacity.
By establishing the load and processing-capacity table, the dynamic adjustment capability of the server cluster is further improved: new bindings are made for newly added front-end devices and for overloaded processing servers.
The invention also provides a distributed scheduling apparatus for computing tasks in a server cluster, applied to a processing server in the server cluster, wherein computing tasks coming from front-end devices are scheduled onto the processing server for computation and the server cluster also comprises a management server. The scheduling apparatus comprises a registration and resource reporting module, a computing task processing module and a request assistance module, wherein:
the registration and resource reporting module is configured to register with the management server, periodically collect the processing server's own computing resource information and load information, and report them to the management server;
the computing task processing module is configured to cache the computing tasks received from front-end devices in a request queue and to process the computing tasks in the request queue in order;
the request assistance module is configured to monitor the request queue of the computing task processing module and, when the length of the request queue exceeds a prescribed threshold, send an assistance request to the management server, the assistance request containing the computing tasks in the request queue that come after the threshold.
Further, the request assistance module is also configured, after receiving a coordination-success response message returned by the management server, to notify the computing task processing module to delete from the request queue the computing tasks carried in the assistance request, and, after receiving a coordination-failure response message returned by the management server, to notify the computing task processing module to keep the computing tasks carried in the assistance request at their positions in the request queue and continue processing them in order.
In the present invention, a computing task from a front-end device comprises the storage location index information corresponding to the computing task data; when computing, the computing task processing module first reads the computing task data according to the storage location index information and then performs the computation on it. Carrying in the computing task the storage index information corresponding to the computing task data, instead of the computing task data itself, reduces the amount of data the front-end device sends to the processing server; the processing server fetches the computing task data only when it actually performs the computation, which reduces the network transmission burden.
In the distributed scheduling apparatus for computing tasks in a server processing cluster proposed by the present invention, each front-end device is first statically bound to a corresponding processing server by a static binding strategy, and then, according to the computing-task load information of the processing servers, other processing servers are coordinated to assist with processing through a dynamic adjustment mechanism. The scheduling apparatus of the present invention reduces the working pressure of the management server, automatically adjusts overloaded processing servers, and ensures the load balance of the server cluster.
Brief description of the drawings
Fig. 1 is a schematic diagram of the networking of the intelligent traffic management system of an embodiment of the present invention;
Fig. 2 is a schematic structural diagram of embodiment one of the scheduling apparatus of the present invention;
Fig. 3 is a schematic structural diagram of embodiment two of the scheduling apparatus of the present invention.
Detailed description of the embodiments
The technical solution of the present invention is described in further detail below with reference to the drawings and embodiments; the following embodiments do not constitute a limitation of the invention.
For massive computing tasks coming from front-end devices, a server cluster is often used to complete the processing in a distributed manner. This embodiment takes the intelligent traffic management system shown in Fig. 1 as an example to describe a distributed scheduling apparatus for computing tasks in a server cluster. The intelligent traffic management system comprises front-end devices, a storage device, a management server and at least one processing server, with the devices connected through a wide area network. Multiple processing servers form a server cluster and are responsible for processing the computing tasks from the front-end devices; the management server is responsible for managing and coordinating the processing servers in the server cluster. The management server is usually located at the administrative center of the intelligent traffic management system; the storage device and the processing servers may be placed centrally at the administrative center, distributed among the administrative centers of the system's regional branches, or co-located with the front-end devices. The invention is not limited to a particular networking form.
The front-end devices are located at the checkpoints of the intelligent traffic management system or at intersections covered by electronic police cameras, and generally include cameras, encoders and the like; this embodiment treats all the devices of one checkpoint as one front-end device. A front-end device captures pictures of passing vehicles, saves the captured pictures to the storage device, and sends processing requests to its processing server. In the intelligent traffic management system, the processing requests from the front-end devices are exactly the computing tasks to be scheduled in a distributed manner.
Embodiment one: a distributed scheduling apparatus for computing tasks in a server cluster, applied to the management server. The apparatus comprises a registration management module, a distribution module and a coordination module, each of which is described below.
The registration management module accepts the registration of processing servers and receives the computing resource information and load information reported by the processing servers.
In the intelligent traffic management system, a processing server registers with the registration management module of the management server at startup and sends keep-alive messages; the processing servers registered with the management server form the server cluster. Each processing server periodically reports its own computing resource information and load information to the registration management module. The management server therefore holds the IP address of each processing server, learns from the keep-alive messages whether a processing server is online or offline, and learns each processing server's computing resource information and load information from the periodic reports. The computing resource information of a processing server refers to its CPU and memory information and may further include CPU and memory usage. The load information comprises the processing capacity, the total maximum input rate and the maximum input rate of each front-end device.
The processing capacity of a processing server is defined as follows. When no backlog of computing tasks occurs, that is, when the length of the computing-task request queue in the cache does not exceed the prescribed threshold, the processing capacity is the maximum number of computing tasks processed per second. When a backlog does occur, that is, when the length of the request queue exceeds the prescribed threshold, the processing capacity of the processing server is its maximum processing capacity. The total maximum input rate is the sum of the maximum rates at which all front-end devices bound to the processing server input computing tasks to it, that is, the total number of computing tasks the processing server receives per second from its bound front-end devices. The maximum input rate of each front-end device is the maximum rate at which that bound front-end device inputs computing tasks to the processing server, that is, the number of computing tasks the processing server receives per second from that front-end device.
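To make the reported quantities concrete, the following Python sketch models such a periodic report as a small data structure. It is only an illustration of the fields described above; the class and field names (LoadReport, server_id, per_device_max_rate and so on) are assumptions of this sketch, not identifiers defined by the patent.

from dataclasses import dataclass, field
from typing import Dict

@dataclass
class LoadReport:
    """Periodic report a processing server sends to the registration management module.
    All names are illustrative only."""
    server_id: str
    cpu_usage: float              # computing resource information: CPU usage (0..1)
    free_memory_mb: int           # computing resource information: free memory
    processing_capacity: int      # maximum computing tasks processed per second
    total_max_input_rate: int     # sum of the maximum input rates of all bound front-end devices
    per_device_max_rate: Dict[str, int] = field(default_factory=dict)  # device id -> max tasks/s

    def remaining_capacity(self) -> int:
        # remaining processing capacity = maximum processing capacity - total maximum input rate
        return self.processing_capacity - self.total_max_input_rate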
The distribution module estimates the expected processing capacity of each registered processing server from its computing resource information, estimates the number of computing tasks each front-end device produces, allocates a bound processing server to each front-end device, and issues the bound processing server to the front-end device.
In this embodiment, a front-end device monitors passing vehicles, captures pictures, stores the pictures in the storage device, and sends processing requests to its bound processing server. The processing requests from the front-end devices are thus the computing tasks to be scheduled in a distributed manner. First, based on the location of each front-end device and historical data, the vehicle flow through that location, that is, the corresponding number of processing requests, is estimated. Then, from the computing resource information of each registered processing server, its expected processing capacity, that is, the number of computing tasks it can handle per second, is estimated. On this basis, a bound processing server is allocated to each front-end device according to the principle of load balancing, and the binding is specified in a configuration file. It should be noted that the load balancing used here when allocating bound processing servers to front-end devices means keeping the number of computing tasks handled by each processing server relatively balanced: a processing server with larger processing capacity handles more computing tasks, and the CPU and memory usage of the processing servers stays relatively balanced at run time, which avoids the situation in which one processing server is extremely busy while another is nearly idle.
For example, suppose the intelligent traffic management system has 2 processing servers and 30 front-end devices. The computing resource information of processing server 1 is an Intel E3 2-core CPU with 4 GB of memory, estimated to process 300 pictures/second; the computing resource information of processing server 2 is an Intel i7 4-core CPU with 8 GB of memory, estimated to process 500 pictures/second. The total number of processing requests from front-end devices 1-12 is estimated not to exceed 300 pictures/second, and the total from front-end devices 13-30 not to exceed 500 pictures/second. Front-end devices 1-12 are therefore allocated to processing server 1, and front-end devices 13-30 to processing server 2.
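A minimal sketch of this static allocation step, assuming the per-server capacity estimates and per-device request-rate estimates are available as plain numbers. The function name allocate_bindings and the greedy strategy are illustrative only; the patent does not prescribe a specific allocation algorithm beyond load balancing.

def allocate_bindings(server_capacity, device_rate):
    """Greedy illustration of static binding by estimated load.
    server_capacity: dict server_id -> estimated tasks/s the server can process
    device_rate:     dict device_id -> estimated tasks/s the device produces
    Returns a dict device_id -> server_id."""
    remaining = dict(server_capacity)   # spare capacity left on each server
    binding = {}
    # place the heaviest front-end devices first so large loads find room
    for dev, rate in sorted(device_rate.items(), key=lambda kv: -kv[1]):
        target = max(remaining, key=remaining.get)   # server with the most spare capacity
        binding[dev] = target
        remaining[target] -= rate
    return binding

# Example loosely matching the figures above (the per-device rates are made up):
servers = {"server1": 300, "server2": 500}
devices = {**{f"dev{i}": 25 for i in range(1, 13)}, **{f"dev{i}": 27 for i in range(13, 31)}}
print(allocate_bindings(servers, devices))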
Since front-end devices register with the video management server of the intelligent traffic management system when they come online, the distribution module can obtain the information of the front-end devices from the video management server, so that the management server can issue to each front-end device the information of its bound processing server; when a front-end device produces a computing task, it then sends the processing request to the bound processing server. The way the distribution module learns the front-end device information is not limited to retrieving it from the video management server; it may also be fetched from a database or configured directly at the administrative center.
Specifically, in the intelligent traffic management system, the processing request sent by a front-end device carries the picture index information of the stored picture and asks the processing server to recognize the picture and extract its structured information, for example by license-plate recognition, vehicle-logo recognition and vehicle-type recognition. Here, the picture index information refers to the storage location index of a captured picture within the storage device after it has been stored there, generally expressed as a URL path.
Thus, after a front-end device registers and comes online, the distribution module reads from the configuration file the processing server bound to that front-end device and issues it to the device in a configuration message. Having received its bound processing server information, the front-end device can store captured pictures in the storage device and send processing requests, that is, computing tasks, to the bound processing server.
The coordination module receives the assistance request sent by a registered processing server when it is overloaded and, according to the processing capacity and load information of the other processing servers, coordinates a processing server that has remaining processing capacity to complete the computing tasks contained in the assistance request.
In this embodiment, a processing server receives the processing request message sent by a front-end device, parses the picture index information in the request message, reads the corresponding picture from the storage device according to the URL path, performs picture recognition, and extracts the structured information in the picture, such as license plate, vehicle logo and vehicle type.
However, bad weather, road maintenance, shifts in a city's hot spots, expansion of the urban fringe and similar factors can all cause the traffic passing some roads to change greatly. When this happens, the number of processing requests sent by a front-end device can change abruptly, and its bound processing server may become too heavily loaded to keep up. When a processing server is overloaded, that is, when it has too many pending processing request messages and its request message queue has built up a certain backlog, it sends an assistance request to the management server. After the coordination module of the management server receives the assistance request, it selects, according to the load and capacity of each processing server (other than the one requesting assistance), the processing server with the lightest load and the largest spare capacity; the selected processing server then reads the pictures from the storage device according to the picture index information carried in the assistance request and performs picture recognition on them.
Specifically, the coordination module compares the load information of the other processing servers (excluding the one that sent this assistance request) and looks in the server cluster for a processing server whose number of computing tasks processable per second is greater than its own computing-task input plus the number of computing tasks contained in the assistance request, and that receives the fewest computing tasks per second, to process the computing tasks in the assistance request. If such a server is found, the computing tasks in the assistance request are issued to it and a coordination-success response message is returned to the processing server that sent the assistance request; otherwise a coordination-failure response message is returned to it.
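The selection rule just described can be sketched as follows, reusing the LoadReport fields from the earlier sketch; the function name and data layout are assumptions of this sketch, not part of the patent.

def pick_assisting_server(reports, requester_id, assist_task_count):
    """Illustrative coordination-module selection.
    reports: dict server_id -> LoadReport (see the earlier sketch)
    Returns the chosen server_id, or None if coordination fails."""
    candidates = []
    for sid, rep in reports.items():
        if sid == requester_id:
            continue  # never pick the server that asked for help
        # its per-second capacity must cover its own input plus the assist batch
        if rep.processing_capacity > rep.total_max_input_rate + assist_task_count:
            candidates.append((rep.total_max_input_rate, sid))
    if not candidates:
        return None   # leads to a coordination-failure response message
    # among qualifying servers, prefer the one currently receiving the fewest tasks per second
    return min(candidates)[1]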
Embodiment two: a distributed scheduling apparatus for computing tasks in a server cluster, applied to a processing server. The apparatus comprises:
A registration and resource reporting module, which registers with the management server, periodically collects the processing server's own computing resource information and load information, and reports them to the management server.
In the intelligent traffic management system, the registration and resource reporting module of a processing server in the server cluster registers with the management server at startup and sends keep-alive messages, and it periodically collects the server's own computing resource information and load information and reports them to the management server. The management server can thus keep track of the computing resource information and load information of every processing server, so that after receiving an assistance request from one processing server it can coordinate according to the load information of the others.
A computing task processing module, which receives the computing tasks from the front-end devices, caches them in a request queue, and processes the computing tasks in the request queue in order.
In this embodiment, the computing tasks from the front-end devices are first queued in the cache to form a request queue, so that it can later be judged from the length of the request queue whether the processing server is overloaded.
During processing, the computing task processing module first parses the picture index information contained in the processing request and reads the corresponding picture data according to this index in order to perform picture recognition. That is, it parses the URL path of the picture carried in the processing request, reads the picture data from the storage device according to this URL path, and performs picture recognition.
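A minimal sketch of this processing step, assuming the picture index is a plain HTTP URL on the storage device; the request key picture_url and the placeholder recognize_picture are assumptions of this sketch and stand in for the recognition algorithms named in the text.

from urllib.request import urlopen

def process_request(request):
    """request is assumed to be a dict such as {"picture_url": "http://storage/..."}."""
    url = request["picture_url"]        # parse the picture index information (URL path)
    with urlopen(url) as resp:          # read the picture data from the storage device
        picture_bytes = resp.read()
    return recognize_picture(picture_bytes)

def recognize_picture(picture_bytes):
    # placeholder for license-plate / vehicle-logo / vehicle-type recognition
    return {"license_plate": None, "vehicle_logo": None, "vehicle_type": None}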
A request assistance module, which monitors the request queue of the computing task processing module and, when the length of the request queue exceeds a prescribed threshold, sends an assistance request to the management server; the assistance request contains the computing tasks in the request queue that come after the threshold.
When the request queue of computing tasks exceeds a certain threshold, for example 128, the computing tasks beyond this threshold (for example 32 of them) are copied into an assistance request, which is sent to the management server.
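The threshold check can be pictured with the sketch below; the threshold value follows the example in the text, while the function and parameter names are illustrative.

QUEUE_THRESHOLD = 128   # prescribed threshold from the example above

def maybe_request_assistance(request_queue, send_to_manager):
    """If the backlog exceeds the threshold, copy the overflow tasks into an assistance
    request. The tasks are not removed here: they stay in the queue until the management
    server returns a coordination-success response message."""
    if len(request_queue) <= QUEUE_THRESHOLD:
        return None
    overflow = list(request_queue)[QUEUE_THRESHOLD:]   # tasks that come after the threshold
    assist_request = {"tasks": overflow, "count": len(overflow)}
    send_to_manager(assist_request)
    return assist_request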
According to the response message received from the management server, the request assistance module also notifies the computing task processing module to proceed as follows:
if a coordination-success response message is received, the computing tasks carried in the assistance request are deleted from the request queue;
if a coordination-failure response message is received, the computing tasks carried in the assistance request keep their positions in the request queue and continue to be processed in order.
In this embodiment, because the number of computing tasks produced by each front-end device may keep changing, and in particular may keep growing, the processing server bound to it may remain overloaded; the binding between processing servers and front-end devices then needs to be adjusted dynamically. Likewise, when a front-end device is newly added, a processing server must be bound to it. Both situations require the load information of every processing server to be tracked; to this end each processing server periodically (for example every hour) collects its own load information. The processing capacity, the total maximum input rate and the per-front-end-device maximum input rate are all expressed in computing tasks per second, which in this embodiment means processing requests per second, that is, pictures to recognize per second. At a fixed interval, for example every hour, the processing server reports the statistics of that period to the management server.
After receiving the data reported by the processing servers, the management server forms the load and processing-capacity table of each processing server, as shown in Table 1. Table 1 also records the number of assistance requests sent by each processing server.
Table 1
After the registration management module of this embodiment receives the load information reported by a processing server, it combines it with the number of assistance requests sent by that processing server and the information of its bound front-end devices to form the load and processing-capacity table. The distribution module obtains the data of Table 1 from the registration management module and dynamically adjusts for newly added front-end devices and for overloaded processing servers; the two adjustment processes are described below.
For a newly added front-end device: when a front-end device is added to the intelligent traffic management system, the distribution module of the management server automatically binds the new front-end device, based on Table 1, to the processing server with the largest remaining processing capacity. In Table 1, for example, processing server 3 has remaining processing capacity, so the new front-end device is bound to processing server 3.
It should be noted that the remaining processing capacity mentioned in this embodiment is the difference between a processing server's maximum processing capacity and its total maximum input rate. If, after a front-end device is added, every processing server is heavily loaded, for example all running at full capacity with no remaining processing capacity, or the remaining capacity is insufficient for the estimated computing tasks of the new front-end device, the distribution module needs to send alarm information to notify the administrators to add a new processing server.
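The handling of a newly added front-end device can be sketched as below, under the assumption that the load and processing-capacity table is available as a plain dictionary; the names bind_new_device, max_capacity and total_input are illustrative only.

def bind_new_device(estimated_rate, servers, alarm):
    """Pick a processing server for a newly added front-end device.
    servers: dict server_id -> {"max_capacity": int, "total_input": int}
    Remaining processing capacity = max_capacity - total_input."""
    best = max(servers, key=lambda s: servers[s]["max_capacity"] - servers[s]["total_input"])
    remaining = servers[best]["max_capacity"] - servers[best]["total_input"]
    if remaining < estimated_rate:
        alarm("no processing server has enough remaining capacity; add a new processing server")
        return None
    servers[best]["total_input"] += estimated_rate   # account for the new binding's load
    return best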
For an overloaded processing server: the distribution module dynamically adjusts the binding between front-end devices and processing servers according to the data of Table 1. There are two adjustment strategies:
in the first, a judgment is made once per period (for example once a month): a processing server that has sent no assistance requests is handling all of its computing tasks within its normal processing capacity and needs no adjustment, while a processing server that has sent assistance requests has computing tasks exceeding its processing capacity and the adjustment process must be triggered for it;
in the second, the adjustment process is triggered when the number of assistance requests of a processing server exceeds a certain threshold (for example 10).
Specifically, in the adjustment process, for a processing server that has sent assistance requests, the front-end devices that exceed that processing server's processing capacity are redistributed to other processing servers that have spare processing capacity.
For the processing server receiving a new binding, the sum of the maximum computing-task input rate of the newly bound front-end device and the server's existing total maximum input rate must not exceed that server's maximum processing capacity. When a processing server has never sent an assistance request, its maximum processing capacity cannot yet be assessed; in this embodiment, for the purpose of this assessment, its maximum capacity is taken to be a default value, namely the minimum of the maximum capacity values of the processing servers that have sent assistance requests.
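The rebinding constraint and the default-capacity rule can be captured in a short check; the dictionary layout and the name can_rebind are assumptions of this sketch.

def can_rebind(device_rate, target, servers):
    """Check whether a front-end device with the given maximum input rate (tasks/s)
    may be rebound to the target processing server.
    servers: dict server_id -> {"total_input": int, "max_capacity": int or None,
                                "assist_requests": int}"""
    known_caps = [s["max_capacity"] for s in servers.values()
                  if s["assist_requests"] > 0 and s["max_capacity"] is not None]
    if not known_caps:
        return False   # no overloaded server yet, so no capacity value to borrow
    t = servers[target]
    # a server that has never requested assistance gets the default (smallest known) capacity
    capacity = t["max_capacity"] if t["assist_requests"] > 0 else min(known_caps)
    # constraint: new device's max input rate + existing total maximum input rate <= capacity
    return t["total_input"] + device_rate <= capacity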
Table 1 is used below as a concrete example.
Because processing server 1 has sent assistance requests, processing server 1 needs adjustment. In Table 1, processing server 3 is the processing server with the lightest computing-task input, and after front-end device 3 is bound to processing server 3, the maximum computing-task input rate of processing server 3 will not exceed its maximum processing capacity (320/second). Front-end device 3 is therefore bound to processing server 3, and Table 1 is updated accordingly.
After the adjustment, the management server issues the information that front-end device 3 is bound to processing server 3 to front-end device 3 in a configuration message; once front-end device 3 has received it, subsequent processing requests are sent to processing server 3 for processing. The updated load and processing-capacity table is shown in Table 2:
Table 2
In summary, the distribution module first determines, from the number of assistance requests sent by the processing servers, which overloaded processing servers need adjustment, and then rebinds the front-end devices that exceed an overloaded processing server's processing capacity to other processing servers that have remaining processing capacity.
Likewise, if the remaining processing capacity of the other processing servers is not enough to take over the maximum input rate of the front-end devices that exceed the overloaded processing server's capacity, the distribution module needs to send alarm information to notify the administrators to add a new processing server.
The above embodiments are intended only to illustrate the technical solution of the present invention and not to limit it; without departing from the spirit and essence of the present invention, those of ordinary skill in the art may make various corresponding changes and modifications according to the present invention, and all such changes and modifications shall fall within the protection scope of the claims appended to the present invention.

Claims (9)

1. A distributed scheduling apparatus for computing tasks in a server cluster, applied to the management server in the server cluster, wherein computing tasks coming from front-end devices are dispatched to the processing servers in the server cluster for computation, characterized in that the scheduling apparatus comprises a registration management module, a distribution module and a coordination module, wherein:
the registration management module is configured to accept the registration of processing servers and to receive the computing resource information and load information reported by the processing servers;
the distribution module is configured to estimate the processing capacity of each registered processing server from its computing resource information, estimate the number of computing tasks each front-end device produces, allocate a bound processing server to each front-end device, and issue the bound processing server to the front-end device;
the coordination module is configured to receive the assistance request sent by a registered processing server when it is overloaded and, according to the processing capacity and load information of the other processing servers, coordinate a processing server that has remaining processing capacity to complete the computing tasks contained in the assistance request.
2. The scheduling apparatus according to claim 1, characterized in that the coordination module is also configured to return a coordination-success or coordination-failure response message to the processing server that sent the assistance request.
3. The scheduling apparatus according to claim 1, characterized in that the load information comprises the processing capacity, the total maximum input rate and the maximum input rate of each front-end device, and the registration management module is also configured to receive the load information reported by a processing server and, combined with the number of assistance requests sent by that processing server and the information of its bound front-end devices, form a load and processing-capacity table.
4. The scheduling apparatus according to claim 3, characterized in that the distribution module is also configured to search the load and processing-capacity table generated by the registration management module for the processing server with the largest remaining processing capacity and to bind a newly added front-end device to the found processing server.
5. The scheduling apparatus according to claim 3, characterized in that the distribution module is also configured to determine, from the load and processing-capacity table generated by the registration management module and the number of assistance requests sent by each processing server, which overloaded processing servers need adjustment, and to rebind the front-end devices that exceed an overloaded processing server's processing capacity to other processing servers that have remaining processing capacity.
6. A distributed scheduling apparatus for computing tasks in a server cluster, applied to a processing server in the server cluster, wherein computing tasks coming from front-end devices are scheduled onto the processing server for computation, characterized in that the server cluster also comprises a management server and the scheduling apparatus comprises a registration and resource reporting module, a computing task processing module and a request assistance module, wherein:
the registration and resource reporting module is configured to register with the management server, periodically collect the processing server's own computing resource information and load information, and report them to the management server;
the computing task processing module is configured to cache the computing tasks received from the front-end devices in a request queue and to process the computing tasks in the request queue in order;
the request assistance module is configured to monitor the request queue of the computing task processing module and, when the length of the request queue exceeds a prescribed threshold, send an assistance request to the management server, the assistance request containing the computing tasks in the request queue that come after the threshold.
7. The scheduling apparatus according to claim 6, characterized in that the request assistance module is also configured, after receiving a coordination-success response message returned by the management server, to notify the computing task processing module to delete from the request queue the computing tasks carried in the assistance request, and, after receiving a coordination-failure response message returned by the management server, to notify the computing task processing module to keep the computing tasks carried in the assistance request at their positions in the request queue and continue processing them in order.
8. The scheduling apparatus according to claim 6, characterized in that the load information comprises the processing capacity, the total maximum input rate and the maximum input rate of each front-end device.
9. The scheduling apparatus according to claim 6, characterized in that a computing task from a front-end device comprises the storage location index information corresponding to the computing task data, and when computing, the computing task processing module first reads the computing task data according to the storage location index information and then performs the computation on it.
CN201410690581.6A 2014-11-26 2014-11-26 Distributed scheduling apparatus for computing tasks in a server cluster Active CN104363300B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410690581.6A CN104363300B (en) 2014-11-26 Distributed scheduling apparatus for computing tasks in a server cluster

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410690581.6A CN104363300B (en) 2014-11-26 Distributed scheduling apparatus for computing tasks in a server cluster

Publications (2)

Publication Number Publication Date
CN104363300A (en) 2015-02-18
CN104363300B (en) 2018-06-05

Family

ID=52530526

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410690581.6A Active CN104363300B (en) 2014-11-26 Distributed scheduling apparatus for computing tasks in a server cluster

Country Status (1)

Country Link
CN (1) CN104363300B (en)

Cited By (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104933167A (en) * 2015-06-30 2015-09-23 深圳走天下科技有限公司 Picture processing system, device and method
CN105007336A (en) * 2015-08-14 2015-10-28 深圳市云舒网络技术有限公司 Load balancing method and load balancing system of server
WO2016197839A2 (en) * 2015-06-08 2016-12-15 阿里巴巴集团控股有限公司 Information processing method and device
CN106302734A (en) * 2016-08-16 2017-01-04 北京控制工程研究所 A kind of implementation method that independently evolution of satellite counting system
CN106557310A (en) * 2015-09-30 2017-04-05 北京奇虎科技有限公司 A kind of remote desktop management method and system
CN106559467A (en) * 2015-09-30 2017-04-05 北京奇虎科技有限公司 A kind of remote desktop management method and system
CN106603695A (en) * 2016-12-28 2017-04-26 北京奇艺世纪科技有限公司 Method and device for adjusting query-per-second value
CN106657191A (en) * 2015-11-02 2017-05-10 杭州华为企业通信技术有限公司 Load balancing method and associated apparatus and system
CN107622117A (en) * 2017-09-15 2018-01-23 广东欧珀移动通信有限公司 Image processing method and device, computer equipment, computer-readable recording medium
WO2018059423A1 (en) * 2016-09-30 2018-04-05 腾讯科技(深圳)有限公司 Distributed resource scheduling method, scheduling node, and access node
WO2018195899A1 (en) * 2017-04-28 2018-11-01 Beijing Didi Infinity Technology And Development Co., Ltd. System and method for task scheduling and device management
CN108776934A (en) * 2018-05-15 2018-11-09 中国平安人寿保险股份有限公司 Distributed data computational methods, device, computer equipment and readable storage medium storing program for executing
CN109194976A (en) * 2018-10-22 2019-01-11 网宿科技股份有限公司 Video processing, dissemination method, storage management, Content Management Platform and system
CN109800204A (en) * 2018-12-27 2019-05-24 深圳云天励飞技术有限公司 Data distributing method and Related product
CN109873858A (en) * 2018-12-27 2019-06-11 中科曙光南京研究院有限公司 A kind of business datum distributed monitoring method and distributed monitoring cluster
CN110062038A (en) * 2019-04-09 2019-07-26 网宿科技股份有限公司 A kind of data transmission scheduling method and system
CN110417831A (en) * 2018-04-27 2019-11-05 杭州海康威视数字技术股份有限公司 Smart machine computational resource allocation method, apparatus and system
CN110659180A (en) * 2019-09-05 2020-01-07 国家计算机网络与信息安全管理中心 Data center infrastructure management system based on cluster technology
CN110830759A (en) * 2018-08-09 2020-02-21 华为技术有限公司 Intelligent application deployment method, device and system
CN110889569A (en) * 2018-09-10 2020-03-17 杭州萤石软件有限公司 Computing power allocation method and device
WO2020057178A1 (en) * 2018-09-21 2020-03-26 华为技术有限公司 Method and apparatus for adjusting resources of intelligent analysis devices
CN111106971A (en) * 2019-12-31 2020-05-05 深圳市九洲电器有限公司 Device registration management method, device and computer-readable storage medium
CN111447113A (en) * 2020-03-25 2020-07-24 中国建设银行股份有限公司 System monitoring method and device
CN111885350A (en) * 2020-06-10 2020-11-03 北京旷视科技有限公司 Image processing method, system, server and storage medium
CN111902808A (en) * 2018-01-22 2020-11-06 株式会社特瑞尔 Information processing apparatus, information processing method, program, and information processing system
CN112218251A (en) * 2019-07-09 2021-01-12 普天信息技术有限公司 Method and device for processing broadband cluster concurrent service
CN112383585A (en) * 2020-10-12 2021-02-19 广州市百果园网络科技有限公司 Message processing system and method and electronic equipment
WO2021031767A1 (en) * 2019-08-22 2021-02-25 中兴通讯股份有限公司 Resource scheduling method, smart front-end device, smart gateway, and distributed system
CN112954264A (en) * 2019-12-10 2021-06-11 浙江宇视科技有限公司 Platform backup protection method and device
CN113055480A (en) * 2021-03-17 2021-06-29 网宿科技股份有限公司 Scheduling method and device
CN113556372A (en) * 2020-04-26 2021-10-26 浙江宇视科技有限公司 Data transmission method, device, equipment and storage medium
CN113687947A (en) * 2021-08-25 2021-11-23 京东方科技集团股份有限公司 Edge box optimization method and device, storage medium and electronic equipment
CN113992493A (en) * 2020-07-08 2022-01-28 阿里巴巴集团控股有限公司 Video processing method, system, device and storage medium
CN114070728A (en) * 2021-11-12 2022-02-18 上海华信长安网络科技有限公司 Method and device for grading telephone set configuration
CN116389502A (en) * 2023-02-28 2023-07-04 港珠澳大桥管理局 Cross-cluster scheduling system, method, device, computer equipment and storage medium

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1315018A (en) * 1997-07-29 2001-09-26 凯萨罗恩产品公司 Computerized system and method for optimally controlling storage and transfer of computer programs on a computer network
US20020103886A1 (en) * 2000-12-04 2002-08-01 International Business Machines Corporation Non-local aggregation of system management data
US20060059251A1 (en) * 2002-05-01 2006-03-16 Cunha Gary D Method and system for request management processing
CN1738244A (en) * 2004-08-17 2006-02-22 北京亿阳巨龙智能网技术有限公司 Method for setting application server by proxy server in soft switching system
CN1873613A (en) * 2005-05-30 2006-12-06 英业达股份有限公司 Load balanced system and method of preloading files
CN1863202A (en) * 2005-10-18 2006-11-15 华为技术有限公司 Method for improving load balance apparatus and servicer processing performance
CN101534244A (en) * 2009-02-09 2009-09-16 华为技术有限公司 Method, device and system for load distribution

Cited By (60)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016197839A3 (en) * 2015-06-08 2017-02-09 阿里巴巴集团控股有限公司 Information processing method and device
US10491707B2 (en) 2015-06-08 2019-11-26 Alibaba Group Holding Limited Information processing using a server group
WO2016197839A2 (en) * 2015-06-08 2016-12-15 阿里巴巴集团控股有限公司 Information processing method and device
CN104933167A (en) * 2015-06-30 2015-09-23 深圳走天下科技有限公司 Picture processing system, device and method
CN105007336B (en) * 2015-08-14 2018-06-29 深圳市云舒网络技术有限公司 The load-balancing method and its system of server
CN105007336A (en) * 2015-08-14 2015-10-28 深圳市云舒网络技术有限公司 Load balancing method and load balancing system of server
CN106557310A (en) * 2015-09-30 2017-04-05 北京奇虎科技有限公司 A kind of remote desktop management method and system
CN106559467A (en) * 2015-09-30 2017-04-05 北京奇虎科技有限公司 A kind of remote desktop management method and system
CN106559467B (en) * 2015-09-30 2021-02-05 北京奇虎科技有限公司 Remote desktop management method and system
CN106657191A (en) * 2015-11-02 2017-05-10 杭州华为企业通信技术有限公司 Load balancing method and associated apparatus and system
WO2017076256A1 (en) * 2015-11-02 2017-05-11 华为技术有限公司 Load balancing method, related device and system
CN106657191B (en) * 2015-11-02 2020-10-16 杭州华为企业通信技术有限公司 Load balancing method and related device and system
US10469805B2 (en) 2015-11-02 2019-11-05 Huawei Technologies Co., Ltd. Load balancing method, and related apparatus and system
CN106302734A (en) * 2016-08-16 2017-01-04 北京控制工程研究所 A kind of implementation method that independently evolution of satellite counting system
CN106302734B (en) * 2016-08-16 2019-03-26 北京控制工程研究所 A kind of autonomous evolution implementation method of satellite counting system
WO2018059423A1 (en) * 2016-09-30 2018-04-05 腾讯科技(深圳)有限公司 Distributed resource scheduling method, scheduling node, and access node
US10838777B2 (en) 2016-09-30 2020-11-17 Tencent Technology (Shenzhen) Company Limited Distributed resource allocation method, allocation node, and access node
CN106603695A (en) * 2016-12-28 2017-04-26 北京奇艺世纪科技有限公司 Method and device for adjusting query-per-second value
US10261841B2 (en) 2017-04-28 2019-04-16 Beijing Didi Infinity Technology And Development Co., Ltd. System and method for task scheduling and device management
WO2018195899A1 (en) * 2017-04-28 2018-11-01 Beijing Didi Infinity Technology And Development Co., Ltd. System and method for task scheduling and device management
TWI685800B (en) * 2017-04-28 2020-02-21 大陸商北京嘀嘀無限科技發展有限公司 System and method for task scheduling and device management
WO2019052355A1 (en) * 2017-09-15 2019-03-21 Oppo广东移动通信有限公司 Image processing method, computer device, and computer readable storage medium
CN107622117A (en) * 2017-09-15 2018-01-23 广东欧珀移动通信有限公司 Image processing method and device, computer equipment, computer-readable recording medium
CN111902808A (en) * 2018-01-22 2020-11-06 株式会社特瑞尔 Information processing apparatus, information processing method, program, and information processing system
CN110417831B (en) * 2018-04-27 2022-07-29 杭州海康威视数字技术股份有限公司 Intelligent equipment computing resource allocation method, device and system
CN110417831A (en) * 2018-04-27 2019-11-05 杭州海康威视数字技术股份有限公司 Intelligent equipment computing resource allocation method, apparatus and system
CN108776934A (en) * 2018-05-15 2018-11-09 中国平安人寿保险股份有限公司 Distributed data computing method, apparatus, computer device and readable storage medium
CN110830759B (en) * 2018-08-09 2021-09-07 华为技术有限公司 Intelligent application deployment method, device and system
CN110830759A (en) * 2018-08-09 2020-02-21 华为技术有限公司 Intelligent application deployment method, device and system
CN110889569A (en) * 2018-09-10 2020-03-17 杭州萤石软件有限公司 Computing power allocation method and device
CN110944146A (en) * 2018-09-21 2020-03-31 华为技术有限公司 Intelligent analysis equipment resource adjusting method and device
CN110944146B (en) * 2018-09-21 2022-04-12 华为技术有限公司 Intelligent analysis equipment resource adjusting method and device
US11537810B2 (en) 2018-09-21 2022-12-27 Huawei Technologies Co., Ltd. Method for adjusting resource of intelligent analysis device and apparatus
WO2020057178A1 (en) * 2018-09-21 2020-03-26 华为技术有限公司 Method and apparatus for adjusting resources of intelligent analysis devices
CN109194976A (en) * 2018-10-22 2019-01-11 网宿科技股份有限公司 Video processing and distribution method, storage management, content management platform and system
CN109873858A (en) * 2018-12-27 2019-06-11 中科曙光南京研究院有限公司 A kind of service data distributed monitoring method and distributed monitoring cluster
CN109800204B (en) * 2018-12-27 2021-03-05 深圳云天励飞技术有限公司 Data distribution method and related product
CN109873858B (en) * 2018-12-27 2021-03-30 中科曙光南京研究院有限公司 Service data distributed monitoring method and distributed monitoring cluster
CN109800204A (en) * 2018-12-27 2019-05-24 深圳云天励飞技术有限公司 Data distribution method and related product
CN110062038A (en) * 2019-04-09 2019-07-26 网宿科技股份有限公司 A kind of data transmission scheduling method and system
CN112218251B (en) * 2019-07-09 2022-01-07 普天信息技术有限公司 Method and device for processing broadband cluster concurrent service
CN112218251A (en) * 2019-07-09 2021-01-12 普天信息技术有限公司 Method and device for processing broadband cluster concurrent service
WO2021031767A1 (en) * 2019-08-22 2021-02-25 中兴通讯股份有限公司 Resource scheduling method, smart front-end device, smart gateway, and distributed system
CN110659180A (en) * 2019-09-05 2020-01-07 国家计算机网络与信息安全管理中心 Data center infrastructure management system based on cluster technology
CN112954264A (en) * 2019-12-10 2021-06-11 浙江宇视科技有限公司 Platform backup protection method and device
CN111106971A (en) * 2019-12-31 2020-05-05 深圳市九洲电器有限公司 Device registration management method, device and computer-readable storage medium
CN111106971B (en) * 2019-12-31 2023-04-18 深圳市九洲电器有限公司 Device registration management method, device and computer-readable storage medium
CN111447113A (en) * 2020-03-25 2020-07-24 中国建设银行股份有限公司 System monitoring method and device
CN111447113B (en) * 2020-03-25 2021-08-27 中国建设银行股份有限公司 System monitoring method and device
CN113556372A (en) * 2020-04-26 2021-10-26 浙江宇视科技有限公司 Data transmission method, device, equipment and storage medium
CN113556372B (en) * 2020-04-26 2024-02-20 浙江宇视科技有限公司 Data transmission method, device, equipment and storage medium
CN111885350A (en) * 2020-06-10 2020-11-03 北京旷视科技有限公司 Image processing method, system, server and storage medium
CN113992493A (en) * 2020-07-08 2022-01-28 阿里巴巴集团控股有限公司 Video processing method, system, device and storage medium
CN112383585A (en) * 2020-10-12 2021-02-19 广州市百果园网络科技有限公司 Message processing system and method and electronic equipment
CN113055480A (en) * 2021-03-17 2021-06-29 网宿科技股份有限公司 Scheduling method and device
CN113687947A (en) * 2021-08-25 2021-11-23 京东方科技集团股份有限公司 Edge box optimization method and device, storage medium and electronic equipment
CN114070728A (en) * 2021-11-12 2022-02-18 上海华信长安网络科技有限公司 Method and device for grading telephone set configuration
CN114070728B (en) * 2021-11-12 2024-04-09 上海华信长安网络科技有限公司 Method and device for grading configuration of telephone
CN116389502A (en) * 2023-02-28 2023-07-04 港珠澳大桥管理局 Cross-cluster scheduling system, method, device, computer equipment and storage medium
CN116389502B (en) * 2023-02-28 2024-02-23 港珠澳大桥管理局 Cross-cluster scheduling system, method, device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN104363300B (en) 2018-06-05

Similar Documents

Publication Publication Date Title
CN104363300A (en) Compute task distributed dispatching device in server cluster
CN106412048A (en) Information processing method and apparatus based on intelligent traffic cloud control system
CN106506673B (en) A kind of large-scale distributed data management system and its method
CN108564791B (en) Traffic information processing method and device and computing equipment
CN109067835A (en) Casualty data processing method based on blockchain
CN101246646A (en) E-Police system structure based on Web service
JP2017507572A (en) Method, system, and computer-readable medium for cloud-based virtual orchestrator
CN102833352A (en) Distributed cache management system and method for implementing distributed cache management
CN102333126A (en) Streaming media on demand method based on Hadoop and virtual streaming media server cluster
CN102404126A (en) Charging method of cloud computing during application process
CN104503826A (en) Virtual machine mapping method and device for cloud computing data center
CN101420458B (en) Multimedia content monitoring system, method and device based on content distributing network
CN109802986A (en) Device management method, system, device and server
CN102209041A (en) Scheduling method, device and system
CN106131227A (en) Load balancing method, metadata server system and load balancing system
CN105025099A (en) Smart camera network system and camera network dynamic task allocation method
CN103236168B (en) Traffic data on-line acquisition system and method
CN110889569A (en) Computing power allocation method and device
CN105118291A (en) Road traffic flow statistic comprehensive monitoring system
CN102904942A (en) Service resource control system and service resource control method
CN204904573U (en) Road traffic flow statistics integrated monitoring system
CN207589029U (en) A kind of intelligent mobile full-view video image monitoring system
CN103781056A (en) Terminal peripheral data management method and M2M gateway
CN101989918A (en) Peer-to-peer network management system and method
CN104270466B (en) Data reporting method and relevant device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant