CN1670705A - A method for implementing centralized concurrent management of a cluster - Google Patents

A method for implementing centralized concurrent management of a cluster

Info

Publication number
CN1670705A
CN 200410008673 · CN200410008673A · CN1670705A
Authority
CN
China
Prior art keywords
task
list
overall
resources
resource
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 200410008673
Other languages
Chinese (zh)
Other versions
CN1315045C (en)
Inventor
许正华
黄平
姜晓东
李电森
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CNB2004100086738A (patent CN1315045C)
Publication of CN1670705A
Application granted
Publication of CN1315045C
Anticipated expiration
Expired - Fee Related (current legal status)

Landscapes

  • Multi Processors (AREA)

Abstract

This invention discloses a method for centralized concurrent management of a cluster, which comprises the following steps: an execution entity in the server listens for task requests; if the resource subset required by a task contains unavailable resource instances, the part of the task that involves them is pruned off, created as a new task, and written into a preset global task list; if a task is composed of more than one independent sub-task, the task is decomposed into those sub-tasks, which are written into the preset global task list.

Description

A method for centralized concurrent management of a cluster
Technical field
The present invention relates to computer cluster network management technology, and in particular to a method for centralized concurrent management of a cluster.
Background art
Management systems applied in computer network environments generally use one of two control strategies: a distributed control strategy based on distributed algorithms, and a centralized control strategy. Under the distributed strategy there is no node dedicated to management; all management tasks are carried out by negotiation among all the nodes. Because distributed algorithms are relatively complex and inefficient, the distributed strategy is generally used only in small-scale management systems. Under the centralized strategy the management system contains dedicated management nodes; such a node is not necessarily unique (it may have backups) and is responsible for managing and controlling the operation of the whole network environment. Management systems for large-scale cluster environments currently adopt the centralized control strategy almost universally.
As shown in Figure 1, a management system that performs centralized management of a cluster generally adopts a client-server architecture: the management server 101 is the management node, and the client computers 103 are the managed nodes. An execution entity on the management server 101 is responsible for listening on the management service port 102 and for handling the tasks it receives. The execution entity is a process or a thread; because processes and threads are handled very similarly, this description uses processes as the example. Referring to Fig. 1, all managed client computers 103 send service requests to the management port 102; a process in the management server 101 handles each request and returns the result to the client 103.
In the prior art, the management server 101 usually performs centralized management of the cluster in one of the following two ways:
Prior art one: the plain concurrent management method. As shown in Figure 2, in this method a master control process 201 in the management server listens for service requests on the management service port 102. Whenever a new connection request arrives on port 102, the master control process 201 dynamically creates a temporary child process 202 to handle it; after the temporary child process 202 finishes the work and returns the result to the requesting client, it destroys itself. This method has an obvious defect: the master control process 201 manages the temporary child processes 202 per connection rather than per task. Since the master control process 201 cannot predict how many processes a task will need, it creates exactly one temporary child process per connection. If a connection needs several processes allocated to it, all of that work must be done by the single child process 202 bound to the connection, which is inefficient. Moreover, this method constantly creates and destroys processes dynamically, and process creation and destruction are very resource-intensive system operations, so it causes serious waste of resources.
Prior art two: a concurrent management method that uses an execution-entity pool and a fine-grained lock mechanism. The execution-entity pool is a process pool or a thread pool; because the two are handled very similarly, this description uses the process pool as the example. As shown in Figure 3, a master control process 301 in the management server dynamically maintains one or more processes in a process pool 302. Several processes in the pool 302 listen simultaneously; when one of them receives a request from a client, it immediately enters the processing state while the others keep listening, which improves the system's concurrency to some extent. The management server periodically checks the state of every process in the pool 302 and keeps a certain number of idle processes on hand: when tasks leave the pool short of idle processes, the server automatically creates new ones in the pool 302, and when the idle processes in the pool 302 exceed the expected number, it automatically destroys some of them. This method does not need to create and destroy processes in real time, so processes persist and resources are saved. At the same time, because the pool 302 always holds some idle processes, processes can be allocated per task at any moment.
The above concurrent management methods still have a resource-contention problem. Here "resources" means the information, maintained by the management server 101 and shared by all processes, that reflects the global state of the whole cluster network. Resources come in many types, for example node state information and service state information. Each resource type contains multiple resource instances, and all instances of the same type form a resource subset. For example, if two services are deployed in a cluster, a Web service and a Mail service, then the state information of the Web service and the state information of the Mail service are two instances of the service-state resource; they belong to the same resource type and thus form a resource subset.
Because management tasks are strongly correlated, in a multitask environment, i.e. one with concurrent processing by multiple processes, it is quite likely that processes compete for the same resource and conflict. If the resource-contention problem is not solved well, the concurrency of the process pool drops sharply. The above concurrent management methods therefore solve the contention problem with a fine-grained lock mechanism.
The "fine-grained lock mechanism" creates one fine-grained lock for each resource instance. Each fine-grained lock is a mutex with a unique identifier and two states: unlocked and locked. Unlocked means that no process currently holds the resource instance exclusively; locked means that some process does. The fine-grained lock mechanism comprises the following basic operations:
1. Lock: the management server can lock any resource instance in a given resource subset. If a resource instance is already held by another process, i.e. its fine-grained lock is in the locked state, then any subsequent process that tries to take the instance enters a blocked sleep state; when the instance is unlocked, all processes blocked on it are woken up.
2. Unlock: the management server can unlock any resource instance in a given resource subset.
3. Get lock state: the management server can obtain the lock state of any resource instance in a given resource subset.
Under the fine-grained lock mechanism, the conventional flow by which a process handles a task is: the process obtains the resource instances the pending task needs, locks them, and uses them to process the task. When multiple processes contend for resources, the mechanism serializes them. Referring to Fig. 4, take two processes, process 1 and process 2, as an example of how the fine-grained lock mechanism handles multi-process contention: before processing its task, process 1 first obtains all the resource instances the task needs, locks them all, and processes the task with them. If the resource instances that process 2 needs conflict with those monopolized by process 1, process 2 enters a blocked sleep state; only after process 1 finishes its task and unlocks all its resource instances does process 2 lock the instances it needs and proceed. Thus when several processes request the same shared resource at once, the fine-grained lock mechanism turns originally concurrent execution into serial execution, and the order is arbitrary: whichever process obtains the resource first executes first. The fine-grained lock mechanism can be implemented in many ways, for example with the general inter-process communication (IPC) facilities provided by modern operating systems.
Prior art two has many advantages over prior art one, but it still has the following defects:
1. When multiple tasks conflict, processing efficiency is low.
Processes in the pool do not interfere with one another while executing tasks, and processes contending for the same resource instances can be serialized by the fine-grained locks. In most cases, however, conflicting tasks contend only for some of each other's resource instances, not for every instance the tasks need, and a task can often continue even without part of its instances. With the method above, whenever one task holds a resource, every other process that has any resource conflict with it is unable to proceed, so the server's processing efficiency suffers.
2. When a single process handles a large task, processing efficiency is low.
A large task is a task set composed of a series of time-consuming sub-tasks. The concurrent management method above processes the sub-tasks of a large task sequentially in one process, and each sub-task takes a certain amount of time, so handling a large task is greatly prolonged and inefficient. For example, in a very large and complex cluster a node may run several services. If the node fails, recovering from the failure is a large task: recovering each failed service on the node is a sub-task that may take several seconds or even tens of seconds. The method of prior art two therefore inevitably makes recovery of the whole failed node slow, and because the number of services running on a node is uncertain, the recovery time of a failed node is also hard to predict.
Summary of the invention
In view of this, the main purpose of the present invention is to provide a method for centralized concurrent management of a cluster that changes the original serial processing into parallel processing and improves the efficiency of concurrent cluster management.
To achieve this goal, the basic idea of the method of the invention is: use multiple execution entities to listen for task requests and process tasks simultaneously. When the resources a task uses conflict with those used by other tasks, prune the part of the task that uses the conflicted resources and continue processing the pruned task, while creating the pruned-off part as a new task; this new task is held pending until the conflicted resources are released, at which point it is processed. When an execution entity receives a request for a large task, the large task is decomposed into multiple sub-tasks that are processed simultaneously by different execution entities in the cluster management server.
The technical scheme of the present invention is specifically realized as follows:
A method for centralized concurrent management of a cluster, the method comprising:
a. an execution entity in the cluster management server listens for a task request message from a cluster client;
b. obtain the resource subset required by the task according to the task request, and judge whether all resource instances in the subset are available; if so, process the task in the conventional flow and end the flow; otherwise, prune the part of the task that involves unavailable resource instances, create the pruned-off part as a new task, and write the new task into a preset global task list;
c. process the pruned task in the conventional flow;
d. idle execution entities in the cluster management server read the global task list concurrently under mutual exclusion, parse the tasks read, and, according to the parse results, call the corresponding task processing modules for conventional processing.
Before processing the pruned task in the conventional flow in step c, the method further comprises:
judging whether the pruned task is a large task made up of more than one independent sub-task; if so, decomposing the task into the independent sub-tasks, writing the sub-tasks into the preset global task list, and executing step d; otherwise, processing the pruned task in the conventional flow.
After the idle execution entities in step d read the global task list concurrently under mutual exclusion and parse the tasks read, the method further comprises:
judging whether all resources used by the task read are available; if so, continuing with the subsequent steps; otherwise, pruning the part of the task that involves unavailable resource instances, creating the pruned-off part as a new task, writing the new task into the preset global task list, and returning to step c.
After calling the corresponding task processing module according to the parse result in step d, the method further comprises:
the task processing module judges whether the task read needs to be executed again; if so, the task is processed conventionally once more; otherwise, the flow ends directly.
The method of judging in step b whether all resource instances in the resource subset are available is: obtain all available resource instances in the subset, and judge whether the number of available instances equals the total number of instances in the subset; if so, all instances in the subset are available; otherwise, the subset contains unavailable instances.
The specific process of obtaining all available resource instances in the resource subset is:
a1. obtain the fine-grained lock states of all resource instances in the resource subset;
a2. read the fine-grained lock state of each resource instance in the subset in turn, and judge whether the instance currently read is in the unlocked state; if so, obtain the instance and execute step a3; otherwise, delete the instance from the subset and execute step a3;
a3. judge whether the instance currently read is the last one in the subset; if so, end the obtaining flow; otherwise, continue with step a2 and read the next instance.
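Steps a1-a3 amount to a scan that keeps only the instances whose fine-grained lock is currently open. A minimal sketch, assuming the fine-grained locks are represented as a dictionary of mutexes (an illustrative representation, not the patent's):

```python
import threading

def obtain_available_instances(locks, subset):
    """locks: dict of instance id -> threading.Lock (the fine-grained locks);
    subset: list of instance ids forming the resource subset."""
    available = []
    for rid in subset:                          # a2: read each instance in turn
        if locks[rid].acquire(blocking=False):
            locks[rid].release()                # probe only; do not keep the lock
            available.append(rid)               # unlocked -> obtain the instance
        # locked -> the instance is dropped from the working subset
    return available                            # a3: done after the last instance
```

The step-b availability test then reduces to comparing `len(available)` with `len(subset)`.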
The global task list comprises: a task list for storing task entities; a task counter for holding the number of tasks currently in the task list; a task-count pointer pointing to the task counter; a task-list pointer pointing to the task list; and a mutex used to make execution entities read the global task list under mutual exclusion. A task entity comprises a task type and parameter information.
The process of writing a task into the global task list comprises the following steps:
b1. lock the mutex;
b2. read the task count from the task counter, and add 1 to it;
b3. obtain the first address of the task list through the task-list pointer, and add the product of the task count of step b2 and the actual task size to the first address of the task list to obtain the storage location of the task to be written;
b4. write the task type and parameter information of the task into the location in the task list determined in step b3;
b5. unlock the mutex.
The specific process by which the idle execution entities read the global task list concurrently under mutual exclusion comprises:
c1. lock the mutex;
c2. judge whether the global task list contains a task to be read; if so, read the task, delete it from the global task list, subtract 1 from the task counter, and go to the next step; otherwise, go directly to the next step;
c3. unlock the mutex.
The idle execution entities that read the global task list concurrently are: idle execution entities in the cluster management server that have finished their current task and released all resources, or idle execution entities in the cluster management server that have received a read command.
The sender of the read command is: the execution entity in the cluster management server that wrote a task into the global task list.
The execution entity is a process or a thread.
A method for centralized concurrent management of a cluster, characterized in that the method comprises:
d1. an execution entity in the cluster management server, according to a task request message received from a cluster client, judges whether the task is a large task made up of more than one independent sub-task; if so, it decomposes the task into the independent sub-tasks, writes the sub-tasks into a preset global task list, and goes to the next step; otherwise, it obtains the resource instances the task uses, processes the task normally in the conventional flow, and ends the flow;
d2. idle execution entities in the cluster management server read the global task list concurrently under mutual exclusion, parse the tasks read, and, according to the parse results, call the corresponding task processing modules for conventional processing.
After calling the corresponding task processing module according to the parse result in step d2, the method further comprises:
the task processing module judges whether the task read needs to be executed again; if so, the task is processed conventionally once more; otherwise, the flow ends directly.
The global task list comprises: a task list for storing task entities; a task counter for holding the number of tasks currently in the task list; a task-count pointer pointing to the task counter; a task-list pointer pointing to the task list; and a mutex used to make execution entities read the global task list under mutual exclusion. A task entity comprises a task type and parameter information.
The process of writing a task into the global task list comprises the following steps:
e1. lock the mutex;
e2. read the task count from the task counter, and add 1 to it;
e3. obtain the first address of the task list through the task-list pointer, and add the product of the task count of step e2 and the actual task size to the first address of the task list to obtain the storage location of the task to be written;
e4. write the task type and parameter information of the task into the location in the task list determined in step e3;
e5. unlock the mutex.
The specific process by which the idle execution entities in the cluster management server read the global task list concurrently under mutual exclusion comprises:
f1. lock the mutex;
f2. judge whether the global task list contains a task to be read; if so, read the task, delete it from the global task list, subtract 1 from the task counter, and go to the next step; otherwise, go directly to the next step;
f3. unlock the mutex.
The idle execution entities that read the global task list concurrently are: idle execution entities in the cluster management server that have finished their current task and released all resources, or idle execution entities in the cluster management server that have received a read command.
The sender of the read command is: the execution entity in the cluster management server that wrote a task into the global task list.
The execution entity is a process or a thread.
The conventional flow for processing a task is: lock its resource instances, and use the obtained resource instances to process the task.
Because the method of the present invention uses an execution-entity pool containing multiple execution entities as its basic application structure, combines it with a flexible fine-grained lock mechanism, and adopts an efficient concurrent task management mechanism to realize centralized concurrent management of a cluster, when the resource instances of multiple tasks conflict, the conflicting parts of the tasks are deferred while the parts without resource conflicts are processed concurrently. This effectively solves the loss of management efficiency caused by resource contention among multiple execution entities and makes the system's overall management faster. When a large task such as a failure occurs, multiple independent execution entities can participate in the recovery simultaneously, making full use of the processing resources of multiple nodes, greatly reducing the total failure-handling time, and improving the availability of applications. In addition, because the execution-entity pool adjusts dynamically, the management server scales well and can satisfy the management requirements of clusters of various sizes. Owing to these advantages, the method of the present invention is especially suitable for complex computer network environments that demand high management efficiency and high application availability.
Description of drawings
Fig. 1 is the topology of a cluster network managed with the centralized control strategy;
Fig. 2 is the process structure diagram of prior art one;
Fig. 3 is the process-pool structure diagram of prior art two;
Fig. 4 is the serialized process execution diagram under the fine-grained lock mechanism of prior art two;
Fig. 5 is the storage structure diagram of the global task list;
Fig. 6 is the structure diagram of a task entity;
Fig. 7 is the flow chart of the first embodiment of the present invention;
Fig. 8 is the flow chart of obtaining all available resource instances in a resource subset;
Fig. 9 is the program structure diagram of the task parsing module and the task processing modules;
Fig. 10 is the flow chart of the second embodiment of the present invention;
Fig. 11 is a schematic diagram of the second embodiment handling a resource conflict between two tasks;
Fig. 12 is the flow chart of the third embodiment of the present invention;
Fig. 13 is a schematic diagram of the third embodiment handling a large task request.
Embodiment
The implementation of the present invention is further described below with reference to the drawings and specific embodiments.
On the basis of the existing execution-entity pool structure and fine-grained lock mechanism, the method of the present invention adds a flexible concurrent task management mechanism to realize centralized concurrent management of a cluster, thereby guaranteeing efficient and reliable management of the cluster.
The execution entity is a process or a thread; the embodiments below use processes as the example.
A global task list is set up in the management server in advance. The global task list is a set of data structures; as shown in Figure 5, it consists of five parts: a task counter (task_counter) 501, a task list (task_list) 502, a task-count pointer 503, a task-list pointer 504, and a mutex 505. The task counter 501 holds the number of tasks currently in the task list 502; the task list 502 is a set of task entities (task_entity), each corresponding to an independent task. The task counter 501 and the task list 502 are kept in a shared memory area created by the master control process when the management server initializes. The task-count pointer 503 and the task-list pointer 504 are two global variables of pointer type that point to the task counter 501 and the task list 502 respectively. In the master process space 506, these two global pointers are assigned when the management server initializes; every child process space 507 holds copies of the global variables, created by the operating system's process-creation system call itself. The mutex 505 is used to make operations on the global task list mutually exclusive between processes; the operations are writing a task into the global task list and reading a task from it. If a process wants to operate on the global task list, it first locks the mutex, so that no other process can operate on the list, and unlocks the mutex when its operation finishes. This ensures that only one process at a time operates on the global task list, so the operations on it cannot conflict between processes.
Fig. 6 is the structure diagram of a task entity. As shown in Figure 6, a task entity comprises a task-type field 601, a parameter-count field 602, and several parameter fields 603. The task type is the identifier that distinguishes different task entities. Because a cluster contains task entities that require different handling, each task type corresponds to one processing flow, and each processing flow is stored in a task processing module. The parameters are the information needed while handling a task. Because different task entities may need different numbers of parameters, the parameter-count field 602 holds the number of parameters the entity actually needs. Parameters are stored as follows: take MAX, the maximum number of task parameters over all task entities, and give every task entity the same storage space, able to hold MAX task parameters; at the same time, add to each task entity a parameter-count field 602 recording how many parameter fields are actually used. Because task parameters also differ in type, the management server stores all task parameters in string form to provide a uniform storage mechanism; the conversion of specific parameter types is handled by each task's own processing flow.
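The fixed-layout task entity of Fig. 6 can be sketched as follows; MAX_PARAMS and the field names are assumptions standing in for MAX and fields 601-603.

```python
from dataclasses import dataclass, field
from typing import List

MAX_PARAMS = 4   # stands in for MAX, the largest parameter count of any task type

@dataclass
class TaskEntity:
    task_type: str                          # field 601: distinguishes task types
    param_count: int = 0                    # field 602: parameters actually used
    params: List[str] = field(              # field 603: fixed slots, string form
        default_factory=lambda: [""] * MAX_PARAMS)

def make_entity(task_type, raw_params):
    # Every parameter is stored as a string for a uniform layout; the task's
    # own processing flow converts each one back to its real type.
    entity = TaskEntity(task_type, len(raw_params))
    for i, p in enumerate(raw_params):
        entity.params[i] = str(p)
    return entity
```

Giving every entity the same MAX_PARAMS-slot footprint is what makes the slot arithmetic of the write procedure (count times entity size) possible.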
The following is the concrete implementation flow of a preferred embodiment of the present invention; as shown in Figure 7, the flow comprises:
Step 701: multiple processes in the process pool listen simultaneously; a process in the pool that receives a task request enters the task-processing state, while the processes that have not received a request keep listening.
Step 702: according to the content of the task request, the process that received it determines the resource subset the task uses and obtains all available resource instances in that subset.
Step 703: judge whether all resource instances in the subset the task uses are available, i.e. whether the number of available instances equals the total number of instances in the subset; if so, execute step 710; otherwise, go to the next step.
Step 704: prune the part of the task that involves conflicted resources, create the pruned-off part as a new task, and write the new task into the global task list to wait for processing.
Whether step 705, the described process that enters the Processing tasks state are judged described is a big task that need split through the task after reducing, if, then carry out next step, otherwise, execution in step 708;
Describedly judge whether task needs to split, and has different judgment mechanisms according to system configuration, for example, if a task is made up of subtask separate more than, then this task is the big task of a need fractionation.
Step 706, described task is decomposed into a plurality of subtasks, and it is medium pending that described subtask is write described overall task list, and a plurality of processes that equate with subtask quantity in process pool and be in the state of intercepting send the order of reading overall task list.
After a plurality of processes in step 707, the above-mentioned process pool were received the order of reading overall task list, mutual exclusion is concurrent read overall task list, reading of task is resolved execution in step 712.
Step 708: handle the pruned task along the conventional flow, i.e. obtain the resource instances of the pruned task, lock them, and use them to process the pruned task conventionally.
Step 709: after the process has finished the current task, released all the resources it occupied, and released the fine-grained locks on the resource instances it held, it actively reads the global task list, concurrently and under mutual exclusion, parses the task it read, and goes to step 712.
Step 710: the process in the task-processing state judges whether the task is a large task that needs to be split. If so, go to step 706; otherwise, continue to the next step.
Step 711: process the task along the conventional flow, i.e. obtain the task's resource instances, lock them, use them to process the task, and end the flow.
Step 712: from the parse result, determine the resource subset used by the task that was read, obtain all available resource instances in that subset, and judge whether every resource instance in the subset is available. If so, continue to the next step; otherwise, return to step 704.
Step 713: call the task processing module corresponding to the task that was read and process it conventionally.
In step 702 above, the detailed procedure for obtaining all available resource instances in the resource subset, shown in Figure 8, comprises:
Step 801: obtain the fine-grained lock state of every resource instance in the resource subset.
Step 802: read the fine-grained lock state of each resource instance in turn, and judge whether the instance currently read is in the unlocked state. If it is, the instance does not conflict with the resource instances used by other tasks, i.e. it is available; acquire it and go to step 803. Otherwise, the instance conflicts with a resource instance used by another task, i.e. it is unavailable; delete it from the subset and then go to step 803.
Step 803: judge whether the instance currently read is the last resource instance in the subset. If so, end the acquisition flow and output the resource subset; otherwise, return to step 802 and read the next instance.
Steps 801 to 803 keep the available resource instances in the subset and filter out the unavailable ones, yielding all available resource instances in the subset.
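The steps 801-803 availability filter can be sketched as below. This assumes each resource instance exposes a fine-grained lock state that can be inspected without blocking; the `Resource` class and its `locked` flag are illustrative names, not from the patent.

```python
# Sketch of steps 801-803: walk the resource subset, keep the instances
# whose fine-grained lock is open, and drop the ones held by other tasks.
class Resource:
    def __init__(self, name: str):
        self.name = name
        self.locked = False  # fine-grained lock state (True = held elsewhere)

def filter_available(subset: list[Resource]) -> list[Resource]:
    available = []
    for res in subset:        # step 802: examine each instance in turn
        if not res.locked:    # unlocked -> no conflict with other tasks
            available.append(res)
        # locked instances are deleted from the subset by omission
    return available          # step 803: last instance reached, output subset

s1, s2, s3 = Resource("service1"), Resource("service2"), Resource("service3")
s2.locked = True              # held by another task's fine-grained lock
avail = filter_available([s1, s2, s3])
```

Comparing `len(avail)` with the size of the original subset then gives exactly the availability test of steps 703/1003.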
The detailed procedure by which steps 704 and 706 write a task into the global task list is:
a. lock the mutex;
b. obtain the task count from the task counter through the task-count pointer, and add 1 to the task count;
c. compute the storage location of the new task from the task-list pointer and the task counter; concretely, the location is the task-list base address plus the product of the task count and the task entity size;
d. write the task type and parameter information of the new task into the task entity at that location;
e. unlock the mutex.
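The steps a-e write path can be sketched as follows, modeling the task list as a fixed array of slots indexed by a task counter and guarded by one mutex. The slot capacity and the `(type, params)` tuple layout are illustrative assumptions; the patent's list stores full task entities at base + count × entity-size.

```python
# Sketch of the steps a-e write path: the counter names the next free slot,
# and the mutex makes read-slot / bump-counter / store one atomic unit.
import threading

CAPACITY = 64
task_list = [None] * CAPACITY   # fixed storage for task entities
task_counter = 0                # number of tasks currently in the list
mutex = threading.Lock()

def write_task(task_type: int, params: list[str]) -> None:
    global task_counter
    with mutex:                 # step a: lock the mutex
        slot = task_counter     # step c: location = base + count * entity size
        task_counter += 1       # step b: add 1 to the task count
        task_list[slot] = (task_type, params)  # step d: write type + params
    # step e: the mutex is released on leaving the `with` block

write_task(1, ["service2"])
write_task(2, ["service3"])
```

Doing the counter update and the slot write inside one critical section is what keeps two concurrent writers from claiming the same slot.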
In the method of the invention, a task parsing module and one task processing module per task type are deployed on the concurrency management server. As shown in Figure 9, the task parsing module 901 resolves the type of a task and, according to that type, calls the corresponding task processing module 902 to handle it.
The detailed procedure by which the processes of steps 707 and 709 concurrently read a task from the global task list under mutual exclusion comprises:
a. lock the mutex;
b. judge whether the global task list contains a task; if so, read the task, delete it from the list, subtract 1 from the task counter, and continue to the next step; otherwise, go directly to the next step;
c. unlock the mutex.
Although the mutex guarantees that only one process at a time obtains the right to execute a task, the blocking it imposes on the other processes wastes management-server resources. Therefore, before the task that was read is parsed and processed, it is deleted from the global task list, which prevents several processes from handling the same task at the same time.
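The mutually exclusive read path (steps a-c) can be sketched as below: a task is taken out of the list and the counter decremented inside one critical section, so no two processes can ever obtain the same task, and parsing happens after the mutex is released. Names and the tuple layout are illustrative assumptions.

```python
# Sketch of the steps a-c read path: read-and-delete under the mutex, so a
# task leaves the global list before any process starts parsing it.
import threading

task_list = [(1, ["service2"]), (2, ["service3"])]  # pending task entities
task_counter = len(task_list)
mutex = threading.Lock()

def read_task():
    global task_counter
    with mutex:                   # step a: lock the mutex
        if task_counter == 0:     # step b: the list may be empty
            return None
        task = task_list.pop(0)   # read and delete in one critical section
        task_counter -= 1         # keep the counter consistent
    return task                   # step c: mutex released by the `with` block

first = read_task()
second = read_task()
third = read_task()               # list now empty
```

Parsing and executing the task outside the lock is what keeps the critical section short and limits the blocking the text warns about.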
In step 712 above, the concrete steps of parsing the task that was read and, according to the parse result, calling the corresponding task processing module for conventional processing are:
d1. the task parsing module 901 reads the task type from the task entity and, according to that type, calls the corresponding task processing module from the task processing module group 902;
d2. the task processing module judges whether the task still needs to be executed; if so, it runs the concrete processing flow in the module and handles the task; otherwise, it exits directly.
The reason for step d2 is that, by the time a task is actually executed, the state of the cluster may have changed since the task was generated. If the task no longer needs to be executed, the module can exit directly.
Each task type corresponds to one task processing module, and each task processing module corresponds to one specific processing flow; the flow is determined by the concrete task, and the method of the invention merely invokes it.
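The type-to-handler dispatch of steps d1-d2 can be sketched as a table mapping each task type to its processing module, where each handler first re-checks that the task is still needed. The handler name, the cluster-state dictionary and the return strings are illustrative assumptions, not the patent's API.

```python
# Sketch of steps d1-d2: the parsing module reads the task type and looks
# up the processing module; the module re-checks cluster state (d2) before
# running its concrete flow, since the task may have become obsolete.
running = {"service2": False}     # assumed cluster state at execution time

def start_service(params):
    name = params[0]
    if running.get(name):         # d2: task no longer needed -> exit directly
        return "skipped"
    running[name] = True          # the module's concrete processing flow
    return "started"

HANDLERS = {1: start_service}     # one processing module per task type

def dispatch(task):
    task_type, params = task      # d1: read the type from the task entity
    return HANDLERS[task_type](params)

r1 = dispatch((1, ["service2"]))  # first execution starts the service
r2 = dispatch((1, ["service2"]))  # repeat is detected as obsolete and skipped
```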
The embodiment above is one preferred embodiment of the present invention; the method of the invention also admits the following embodiments:
As shown in Figure 10, the concrete flow of the second embodiment of the invention comprises:
Step 1001: multiple processes in the process pool are in the listening state at the same time. A process that hears a task request enters the task-processing state; processes that hear no request remain in the listening state.
Step 1002: the process that heard the task request determines, from the content of the request, the resource subset the task will use, and obtains all available resource instances in that subset.
Step 1003: judge whether every resource instance in the task's resource subset is available, i.e. whether the number of available resource instances equals the total number of resource instances in the subset. If so, process the task along the conventional flow and end the flow; otherwise, continue to the next step.
Step 1004: prune from the task the partial task that involves the unavailable resource instances, create a new task from the pruned part, write the new task into a global task list to await processing, and handle the pruned task in the normal way, i.e. process it along the conventional flow using the available resource instances.
Step 1005: after the process has finished the current task and released all the resources it occupied, it actively reads the global task list, concurrently and under mutual exclusion, and parses the task it read.
Step 1006: from the parse result, judge whether every resource instance in the resource subset used by the task is available, i.e. whether the number of available resource instances equals the total number of resource instances in the subset. If so, continue to the next step; otherwise, return to step 1004.
Step 1007: call the corresponding task processing module and process the task conventionally.
The second embodiment is illustrated below through a concrete application example.
Suppose service 1 and service 2 are deployed on node 1, and service 2 and service 3 are deployed on node 2; node 1 and node 2 join the cluster in turn and send service-request tasks to the centralized manager. Both task requests include service 2, so the two tasks conflict when they use the resources of service 2.
Figure 11 is a schematic diagram of how the second embodiment of the invention handles the resource conflict between the two tasks above. As shown in Figure 11, the processing is:
Step 1101: node 1 sends the request service task "request service 1 and service 2" to the process pool.
Step 1102: process 1 in the process pool hears the request, switches from the listening state to the busy state, and obtains the available resources, namely the state information of service 1 and service 2. All resources of this task are assumed available, so process 1 applies fine-grained locks to the state information of service 1 and service 2 and continues to the next step, handling the task along the conventional flow.
Step 1103: process 1 sends node 1 the service assignment information allowing it to start service 1 and service 2, and waits for the start result.
Step 1104: node 2 sends the request service task "request service 2 and service 3" to the process pool.
Step 1105: process 2 in the process pool hears the request, switches from the listening state to the busy state, obtains the available resource, namely the state information of service 3, and applies a fine-grained lock to it. Because the state information of service 2 has already been fine-grain locked by process 1, part of this task's resources is unavailable, so processing continues with the next step.
Step 1106: process 2 prunes service 2 out of the request service task, sends node 2 only the service assignment information for the startable service 3, and waits for the start result.
Step 1107: process 2 generates a new task, "start service 2", and writes it into the global task list to await processing. The purpose of this step is to keep service 2 from becoming unavailable should node 1 fail to start it.
Step 1108: node 1 and node 2 send their service start results to process 1 and process 2 respectively, and process 1 and process 2 read the global task list, concurrently and under mutual exclusion, according to the circumstances:
If process 1 finishes its task and releases its resources first, it reads the new task from the global task list and deletes it from the list. Since there is no resource conflict at this point, process 1 calls the start-service task processing module to handle the new task. In this example the module's processing flow is: judge whether service 2 has started; if so, process 1 does not execute the new task; otherwise, it executes the new task, i.e. starts service 2.
If process 2 finishes its task and releases its resources first, it reads the global task list, obtains the new task, and deletes it from the list; it then judges whether the new task has a resource conflict. Because process 1 is still processing its task at this point, the state information of service 2 is still locked, so the new task does have a resource conflict. Process 2 therefore prunes service 2 out, creates another new task for service 2, namely "start service 2", and writes it into the global task list to be read. Since nothing remains of the task after service 2 is pruned, process 2 does nothing further and returns to the listening state. After process 1 finishes its work and releases its resources, it reads the new task from the global task list, deletes it from the list, and hands it to the start-service task processing module, whose processing flow in this example is: judge whether service 2 is already running; if so, process 1 does not execute the new task; otherwise, it executes the new task, i.e. starts service 2.
In the application example above, the task requests sent by node 1 and node 2 both include service 2, so the two tasks conflict over the resources of service 2. With the prior-art method, the two tasks would be serialized and executed one after the other, which makes management inefficient. The method of the invention instead prunes the conflicting service 2 out of the task sent by node 2, continues with service 3, generates a new task for the handling of service 2, and writes it into the global task list to await processing; a process then picks up the new task after finishing its current one. This makes full use of the available resources and improves efficiency.
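The pruning idea in this example can be illustrated with a small single-threaded simulation: a request whose resources are partly locked is split into an immediately runnable part and a deferred "new task" placed on a global list. All names and the data shapes are illustrative, not from the patent.

```python
# Sketch of the prune-and-defer behavior of steps 1105-1107: the conflicting
# part of a request is cut out and queued as a new task, and the rest is
# processed immediately instead of serializing the whole request.
locked = {"service2"}                  # service 2's state is held by process 1
global_task_list: list[list[str]] = []

def handle_request(services: list[str]) -> list[str]:
    runnable = [s for s in services if s not in locked]
    deferred = [s for s in services if s in locked]
    if deferred:                       # prune the conflicting part...
        global_task_list.append(deferred)  # ...and defer it as a new task
    return runnable                    # process the rest conventionally

started_now = handle_request(["service2", "service3"])  # node 2's request
```

The conventional serialized alternative would have returned nothing until service 2's lock was released; here service 3 proceeds at once.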
As shown in Figure 12, the concrete flow of the third embodiment of the invention comprises:
Step 1201: multiple processes in the process pool are in the listening state at the same time. A process that hears a task request enters the task-processing state; processes that hear no request remain in the listening state.
Step 1202: the process that heard the task request judges whether the task is a large task that needs to be split. If so, continue to the next step; otherwise, process the task along the conventional flow.
Step 1203: decompose the task into several subtasks, write the subtasks into a global task list to await processing, and send a read-global-task-list command to as many listening processes in the process pool as there are subtasks.
Step 1204: after the processes in the pool receive the read command, they concurrently read the global task list under mutual exclusion, parse and process the tasks they read, and end the flow.
The third embodiment is illustrated below through a concrete application example.
For example, three services run on node 2: service 1, service 2 and service 3, and node 2 fails. Because node 2 has failed, none of service 1, service 2 or service 3 can be used normally, so the request service task comprises three independent subtasks: "recover service 1", "recover service 2" and "recover service 3".
Figure 13 is a schematic diagram of how the third embodiment of the invention handles the large task request above. As shown in Figure 13, the processing is:
Step 1301: node 1 sends the request service task "handle the failure of node 2" to the process pool.
Step 1302: process 1 in the process pool hears the request. Because the task comprises the failure handling of service 1, service 2 and service 3, process 1 judges the request to be a large task, decomposes it into three subtasks, and writes them into the global task list: "recover service 1", "recover service 2" and "recover service 3".
Step 1303: process 1 sends the read-global-task-list command to the other processes in the pool that are in the listening state, namely process 2, process 3 and process 4, notifying them to read the global task list.
Step 1304: on receiving the read signal, process 2, process 3 and process 4 concurrently read the global task list under mutual exclusion.
Step 1305: process 2, process 3 and process 4 each read one of the three new tasks, parse it through the task parsing module, and call the corresponding task processing modules to handle the three tasks, i.e. to recover from the failures of service 1, service 2 and service 3.
In the application example above, the request service task "node 2 fails" is a large task comprising the three subtasks "recover service 1", "recover service 2" and "recover service 3". Handling this large task with a single process, as in the prior art, wastes time and is inefficient. The method of the invention decomposes the large task into three subtasks executed simultaneously by three processes, saving time and improving efficiency.
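The third embodiment's split-and-fan-out can be sketched as below: one composite task is decomposed into independent subtasks, and a pool of worker threads drains a mutex-guarded global list, each worker taking one subtask at a time. Threads stand in for the patent's processes, and all names are illustrative.

```python
# Sketch of steps 1302-1305: three workers concurrently read the global
# task list under one mutex, so each subtask is handled exactly once.
import threading

global_task_list = ["recover service 1", "recover service 2",
                    "recover service 3"]   # subtasks of the large task
mutex = threading.Lock()
recovered = []

def worker():
    while True:
        with mutex:                # mutually exclusive read of the list
            if not global_task_list:
                return             # nothing left: back to listening
            task = global_task_list.pop(0)   # read-and-delete one subtask
            recovered.append(task.replace("recover ", ""))

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With a single worker the three recoveries would run strictly one after another; the fan-out lets the pool process them concurrently, which is the efficiency gain the text claims.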
The above is merely a preferred embodiment of the present invention, but the protection scope of the invention is not limited to it; any variation or replacement that a person skilled in this art could readily conceive within the technical scope disclosed by the invention shall fall within the protection scope of the invention.

Claims (22)

1. A method for centralized concurrent management of a cluster, characterized in that the method comprises:
a. an execution entity in the cluster management server listens for task request messages from cluster clients;
b. according to the task request, obtain the resource subset required by the task and judge whether every resource instance in the subset is available; if so, process the task along the conventional flow and end the flow; otherwise, prune from the task the partial task that involves the unavailable resource instances, create a new task from the pruned part, and write the new task into a preset global task list;
c. process the pruned task along the conventional flow;
d. idle execution entities in the cluster management server concurrently read the global task list under mutual exclusion, parse the tasks they read, and, according to the parse result, call the corresponding task processing modules for conventional processing.
2. The method of claim 1, characterized in that, before the pruned task is processed along the conventional flow in step c, the method further comprises:
judging whether the pruned task is a large task consisting of more than one independent subtask; if so, decomposing the task into the independent subtasks, writing the subtasks into the preset global task list, and going directly to step d; otherwise, processing the pruned task along the conventional flow.
3. The method of claim 1 or 2, characterized in that, after the idle execution entities of step d concurrently read the global task list under mutual exclusion and parse the tasks they read, the method further comprises:
judging whether all resources used by the task that was read are available; if so, continuing with the subsequent steps; otherwise, pruning from the task the partial task that involves the unavailable resource instances, creating a new task from the pruned part, writing the new task into the preset global task list, and returning to step c.
4. The method of claim 1 or 2, characterized in that, after the corresponding task processing module is called in step d according to the parse result, the method further comprises:
the task processing module judging whether the task that was read still needs to be executed; if so, processing the task conventionally; otherwise, ending the flow directly.
5. The method of claim 1 or 2, characterized in that the method of step b for judging whether every resource instance in the resource subset is available is: obtain all available resource instances in the resource subset and judge whether the number of available instances equals the total number of instances in the subset; if so, every resource instance in the subset is available; otherwise, the subset contains unavailable resource instances.
6. The method of claim 5, characterized in that the detailed procedure for obtaining all available resource instances in the resource subset is:
a1. obtain the fine-grained lock state of every resource instance in the resource subset;
a2. read the fine-grained lock state of each resource instance in turn and judge whether the instance currently read is in the unlocked state; if so, acquire the instance and go to step a3; otherwise, delete the instance from the subset and go to step a3;
a3. judge whether the instance currently read is the last resource instance in the subset; if so, end the acquisition flow; otherwise, return to step a2 and read the next instance.
7. The method of claim 1 or 2, characterized in that the global task list comprises: a task list for storing task entities, a task counter for keeping the number of tasks currently in the list, a task-counter pointer pointing to the task counter, a task-list pointer pointing to the task list, and a mutex enabling execution entities to read the global task list under mutual exclusion; each task entity comprises a task type and parameter information.
8. The method of claim 7, characterized in that the procedure for writing a task into the global task list comprises the following steps:
b1. lock the mutex;
b2. read the task count in the task counter and add 1 to it;
b3. obtain the base address of the task list through the task-list pointer, and add the product of the task count of step b2 and the task entity size to the base address to obtain the storage location of the task to be written;
b4. write the task type and parameter information of the task to be written into the location of step b3 in the task list;
b5. unlock the mutex.
9. The method of claim 7, characterized in that the detailed procedure by which the idle execution entities read the global task list under mutual exclusion comprises:
c1. lock the mutex;
c2. judge whether the global task list contains a task to be read; if so, read the task, delete it from the list, subtract 1 from the task counter, and continue to the next step; otherwise, go directly to the next step;
c3. unlock the mutex.
10. The method of claim 1 or 2, characterized in that the idle execution entity that concurrently reads the global task list is: an idle execution entity in the cluster management server that has finished its current task and released all its resources, or an idle execution entity in the cluster management server that has received a read command.
11. The method of claim 10, characterized in that the sender of the read command is the execution entity in the cluster management server that wrote the task into the global task list.
12. The method of claim 1 or 2, characterized in that the execution entity is a process or a thread.
13. The method of claim 1 or 2, characterized in that the method of processing a task along the conventional flow is: lock the obtained resource instances, and use them to process the task.
14. A method for centralized concurrent management of a cluster, characterized in that the method comprises:
d1. according to a task request message heard from a cluster client, an execution entity in the cluster management server judges whether the task is a large task consisting of more than one independent subtask; if so, it decomposes the task into the independent subtasks, writes the subtasks into a preset global task list, and continues to the next step; otherwise, it obtains the resource instances the task uses, processes the task normally along the conventional flow, and ends the flow;
d2. idle execution entities in the cluster management server concurrently read the global task list under mutual exclusion, parse the tasks they read, and, according to the parse result, call the corresponding task processing modules for conventional processing.
15. The method of claim 14, characterized in that, after the corresponding task processing module is called in step d2 according to the parse result, the method further comprises:
the task processing module judging whether the task that was read still needs to be executed; if so, processing the task conventionally; otherwise, ending the flow directly.
16. The method of claim 14, characterized in that the global task list comprises: a task list for storing task entities, a task counter for keeping the number of tasks currently in the list, a task-counter pointer pointing to the task counter, a task-list pointer pointing to the task list, and a mutex enabling execution entities to read the global task list under mutual exclusion; each task entity comprises a task type and parameter information.
17. The method of claim 16, characterized in that the procedure for writing a task into the global task list comprises the following steps:
e1. lock the mutex;
e2. read the task count in the task counter and add 1 to it;
e3. obtain the base address of the task list through the task-list pointer, and add the product of the task count of step e2 and the task entity size to the base address to obtain the storage location of the task to be written;
e4. write the task type and parameter information of the task to be written into the location of step e3 in the task list;
e5. unlock the mutex.
18. The method of claim 16, characterized in that the detailed procedure by which the idle execution entities in the cluster management server read the global task list under mutual exclusion comprises:
f1. lock the mutex;
f2. judge whether the global task list contains a task to be read; if so, read the task, delete it from the list, subtract 1 from the task counter, and continue to the next step; otherwise, go directly to the next step;
f3. unlock the mutex.
19. The method of claim 14, characterized in that the idle execution entity that concurrently reads the global task list is: an idle execution entity in the cluster management server that has finished its current task and released all its resources, or an idle execution entity in the cluster management server that has received a read command.
20. The method of claim 19, characterized in that the sender of the read command is the execution entity in the cluster management server that wrote the task into the global task list.
21. The method of any one of claims 14 to 20, characterized in that the execution entity is a process or a thread.
22. The method of claim 14, characterized in that the method of processing a task along the conventional flow is: lock the obtained resource instances, and use them to process the task.
CNB2004100086738A 2004-03-16 2004-03-16 A method for implementing centralized concurrent management to cluster Expired - Fee Related CN1315045C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2004100086738A CN1315045C (en) 2004-03-16 2004-03-16 A method for implementing centralized concurrent management to cluster


Publications (2)

Publication Number Publication Date
CN1670705A true CN1670705A (en) 2005-09-21
CN1315045C CN1315045C (en) 2007-05-09

Family

ID=35041978

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2004100086738A Expired - Fee Related CN1315045C (en) 2004-03-16 2004-03-16 A method for implementing centralized concurrent management to cluster

Country Status (1)

Country Link
CN (1) CN1315045C (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101984404A (en) * 2010-11-04 2011-03-09 东莞宇龙通信科技有限公司 Method and device for controlling parallel running of application programs and terminal having device
CN102043669A (en) * 2010-12-22 2011-05-04 中国农业银行股份有限公司 Workflow concurrency mode control method and device
CN101370024B (en) * 2007-08-15 2012-10-31 北京灵图软件技术有限公司 Distributed information collection method and system
CN101201753B (en) * 2007-12-13 2012-12-26 浪潮通信信息***有限公司 Method for configuring and managing multimode machine supervising engine
CN102880503A (en) * 2012-08-24 2013-01-16 新浪网技术(中国)有限公司 Data analysis system and data analysis method
CN101140528B (en) * 2007-08-31 2013-03-20 中兴通讯股份有限公司 Method and device for realizing timing tasks load in cluster
CN103577594A (en) * 2013-11-13 2014-02-12 广州市久邦数码科技有限公司 Lock screen interface information pushing system and method
CN103699363A (en) * 2013-12-13 2014-04-02 华中科技大学 Method for optimizing key critical zone under multi-core platform
CN103984544A (en) * 2014-04-24 2014-08-13 浪潮电子信息产业股份有限公司 Method for preventing tasks from being simultaneously executed
CN105959238A (en) * 2016-06-28 2016-09-21 深圳市彬讯科技有限公司 Dispatching method and system for multi-priority rendering resources based on message queues
CN106095585A (en) * 2016-06-22 2016-11-09 中国建设银行股份有限公司 Task requests processing method, device and enterprise information system
CN110569115A (en) * 2019-08-15 2019-12-13 贝壳技术有限公司 Multipoint deployment process management method and process contention method
CN111385255A (en) * 2018-12-28 2020-07-07 北京金山云网络技术有限公司 Asynchronous call implementation method and device, server and server cluster
CN111949471A (en) * 2019-05-16 2020-11-17 珠海格力电器股份有限公司 Monitoring control method for terminal process

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7299466B2 (en) * 2001-12-20 2007-11-20 Cadence Design Systems, Inc. Mechanism for managing execution environments for aggregated processes
CN1266604C (en) * 2002-07-02 2006-07-26 联想(北京)有限公司 Management system for large scale scale heterogeneous cluster

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101370024B (en) * 2007-08-15 2012-10-31 北京灵图软件技术有限公司 Distributed information collection method and system
CN101140528B (en) * 2007-08-31 2013-03-20 中兴通讯股份有限公司 Method and device for realizing timing tasks load in cluster
CN101201753B (en) * 2007-12-13 2012-12-26 浪潮通信信息***有限公司 Method for configuring and managing multimode machine supervising engine
CN101984404B (en) * 2010-11-04 2014-02-05 东莞宇龙通信科技有限公司 Method and device for controlling parallel running of application programs and terminal having device
CN101984404A (en) * 2010-11-04 2011-03-09 东莞宇龙通信科技有限公司 Method and device for controlling parallel running of application programs and terminal having device
CN102043669A (en) * 2010-12-22 2011-05-04 中国农业银行股份有限公司 Workflow concurrency mode control method and device
CN102043669B (en) * 2010-12-22 2013-01-09 中国农业银行股份有限公司 Workflow concurrency mode control method and device
CN102880503B (en) * 2012-08-24 2015-04-15 新浪网技术(中国)有限公司 Data analysis system and data analysis method
CN102880503A (en) * 2012-08-24 2013-01-16 新浪网技术(中国)有限公司 Data analysis system and data analysis method
CN103577594A (en) * 2013-11-13 2014-02-12 广州市久邦数码科技有限公司 Lock screen interface information pushing system and method
CN103699363A (en) * 2013-12-13 2014-04-02 华中科技大学 Method for optimizing key critical zone under multi-core platform
CN103984544A (en) * 2014-04-24 2014-08-13 浪潮电子信息产业股份有限公司 Method for preventing tasks from being simultaneously executed
CN106095585A (en) * 2016-06-22 2016-11-09 中国建设银行股份有限公司 Task requests processing method, device and enterprise information system
CN106095585B (en) * 2016-06-22 2019-08-30 中国建设银行股份有限公司 Task requests processing method, device and enterprise information system
CN105959238A (en) * 2016-06-28 2016-09-21 深圳市彬讯科技有限公司 Dispatching method and system for multi-priority rendering resources based on message queues
CN111385255A (en) * 2018-12-28 2020-07-07 北京金山云网络技术有限公司 Asynchronous call implementation method and device, server and server cluster
CN111385255B (en) * 2018-12-28 2022-05-06 北京金山云网络技术有限公司 Asynchronous call implementation method and device, server and server cluster
CN111949471A (en) * 2019-05-16 2020-11-17 珠海格力电器股份有限公司 Monitoring control method for terminal process
CN110569115A (en) * 2019-08-15 2019-12-13 贝壳技术有限公司 Multipoint deployment process management method and process contention method
CN110569115B (en) * 2019-08-15 2023-08-04 贝壳技术有限公司 Multi-point deployment process management method and process competing method

Also Published As

Publication number Publication date
CN1315045C (en) 2007-05-09

Similar Documents

Publication Publication Date Title
CN1315045C (en) A method for implementing centralized concurrent management to cluster
CN1264078A (en) Computer for executing multiple operation systems
US7810099B2 (en) Optimizing workflow execution against a heterogeneous grid computing topology
CN1292346C (en) System and method for performing task in distributing calculating system structure
CN1908903A (en) System and method for executing job step, and computer product
CN1917504A (en) Method for preventing deadlock caused by accessing shared data of resources
CN101055533A (en) Multithreading processor dynamic EMS memory management system and method
US11119911B2 (en) Garbage collection method and device
CN101031886A (en) Network system, management computer, cluster management method, and computer program
CN101079902A (en) A great magnitude of data hierarchical storage method
CN106033373A (en) A method and a system for scheduling virtual machine resources in a cloud computing platform
CN1906580A (en) Method and system for a grid-enabled virtual machine with movable objects
CN1658145A (en) Scalable print spooler
CN1791037A (en) Method for realizing Web service automatic test
CN1710535A (en) GIS application system and development method thereof
CN1874538A (en) Concurrent method for treating calling events
CN1975655A (en) Method and apparatus for managing access to storage
CN101079736A (en) Modeled network resource positioning method
CN1889048A (en) Package realizing method and system
CN1889045A (en) Cocurrent event processing device and method in multi-task software system
CN1754704A (en) Multiple function image formation device and method thereof
CN1101573C (en) Computer system
CN1738244A (en) Method for setting application server by proxy server in soft switching system
CN101064619A (en) Resource management platform having topics distribution function and its method
CN101030152A (en) Operation controller and controlling method based on pseudo-clock mode

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20070509

Termination date: 20210316