CN116541166A - Super-computing power scheduling server and resource management method - Google Patents
- Publication number
- CN116541166A (application number CN202310453599.3A)
- Authority
- CN
- China
- Prior art keywords
- scheduling
- data
- server
- information
- node
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/505—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/004—Artificial life, i.e. computing arrangements simulating life
- G06N3/006—Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention discloses a super computing power scheduling server and a resource management method, belonging to the technical field of communication. The management method comprises the following specific steps: (1) planning a server resource scheduling path; (2) constructing a classification map library and classifying and integrating each data resource; (3) constructing and updating an original scheduling strategy according to the demands of staff; (4) recording scheduling information, performing risk analysis on the scheduling information, and simultaneously optimizing the performance of the platform. By continuously optimizing the path through a genetic algorithm, the invention greatly reduces the limitations on use and greatly improves the resource scheduling efficiency of the server; it can automatically build models and search parameters without manual operation by staff, which lowers the difficulty of use while improving both resource scheduling management efficiency and the efficiency of scheduling strategy formulation.
Description
Technical Field
The invention relates to the technical field of communication, and in particular to a super computing power scheduling server and a resource management method.
Background
As enterprise application environments gradually shift to Internet-based distributed computing environments, enterprise-level Web applications exhibit characteristics such as complexity and dynamism, placing higher demands on the performance optimization of Web application servers located in the middleware layer, where the utilization and scheduling of resources are the key problems affecting server performance. The load on Web systems keeps increasing: a large number of requests with complex tasks are sent to the server every day, and massive concurrent data accesses place higher demands on the management of Web application servers and database services. Reasonably improving load response time without putting excessive pressure on the Web application server has become the key to improving the overall performance of a Web system; it is therefore particularly important to devise a super computing power scheduling server and resource management method.
Existing computing power scheduling servers and resource management methods suffer from large limitations on use and slow server resource scheduling; in addition, they require staff to adjust scheduling strategies manually, so the difficulty of use is high and the efficiency of scheduling strategy formulation is low. We therefore propose a super computing power scheduling server and a resource management method.
Disclosure of Invention
The invention aims to remedy the defects of the prior art and provides a super computing power scheduling server and a resource management method.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
A super computing power scheduling server and a resource management method, the management method comprising the following specific steps:
(1) Planning a server resource scheduling path;
(2) Constructing a classification map library and classifying and integrating each data resource;
(3) Constructing and updating an original scheduling strategy according to the demands of staff;
(4) And recording the scheduling information and performing risk analysis on the scheduling information and simultaneously optimizing the performance of the platform.
As a further aspect of the present invention, the specific steps of scheduling the path in the step (1) are as follows:
step one: collecting information of each server node, calculating data transmission rate of each server, adding the server node which completes data transmission to a node queue of a transmission track, and adding the server node which does not complete data transmission to an alternative queue;
step two: representing the set of all transmission tracks of each group of servers as a population, generating a population matrix by combining a genetic algorithm, randomly selecting two groups of individuals from the population matrix, selecting a certain path from the two groups of individuals respectively, and exchanging to obtain two new groups of individuals;
step three: and randomly selecting a group of individuals, randomly selecting two sections of paths in the individuals for exchanging to perform path optimization, traversing each node from the end point of the path, if a certain node can be connected with the starting point in a barrier-free manner, determining that the node between the starting point and the node is a redundant node, deleting the redundant nodes after the redundant node is confirmed, recalculating the fitness function of the path, and continuously optimizing the resource scheduling path through continuous iteration.
As a further aspect of the present invention, the step (2) of classification integration specifically includes the following steps:
step (1): the method comprises the steps that a classification map library obtains data labels of all servers as knowledge ranges, then obtains classification knowledge data in the Internet at the same time, extracts all groups of characteristic keywords, and converts all the characteristic keywords into word vectors at the same time;
step (2): constructing a TransD model to receive related data, using a transfer vector to represent an origin to information embedded vector in a space, and measuring the preference degree of specific information through transfer and Euclidean distance of the information;
step (3): constructing a topic text knowledge subgraph by a TransD model, extracting the relation between the subgraph and an entity according to the entity, adopting a knowledge graph embedding model to learn, taking the learned entity vector as the input of a CNN layer, outputting a corresponding entity table and a relation table, then installing and configuring a Neo4j database, and simultaneously starting Neo4j service and importing data to complete knowledge graph construction;
step (4): and classifying the server resources according to the constructed classification map library, screening out repeated data resources, and integrating the data of each group into corresponding classification labels according to the tree distribution form.
As a further scheme of the present invention, the specific step of updating the scheduling policy in the step (3) is as follows:
step I: collecting original scheduling strategy running information as a sample data set, then counting the average value of the sample data set, acquiring the standard deviation of the sample data set according to the calculated average value, and rejecting the data in the sample data set according to the standard deviation;
step II: carrying out standardization processing on the residual data, carrying out normalization processing on each group of processed data, taking the processed data as a training set, constructing a group of convolutional neural networks, carrying out assignment on parameter setting vectors of the convolutional neural networks, and determining the number of neurons of each neural network layer;
step III: inputting the training set into an input layer of a neural network, determining a central vector value to obtain a linear combination of which the output layer is a hidden node output, defining an energy function of multi-round learning of the convolutional neural network by adopting a least square recursion method, and ending the training process and outputting a scheduling update model when the energy function value is smaller than a target error, otherwise, continuing training;
step IV: and determining optimal parameters of a scheduling update model, then, in the scheduling update model, generating a corresponding scheduling strategy by processing the scheduling update model through input, convolution, pooling, full connection and output, and feeding back the scheduling strategy to the staff for checking.
As a further scheme of the invention, the standard deviation in step I is specifically calculated as follows:

s = √( (1/n) · Σᵢ₌₁ⁿ (xᵢ − x̄)² ),  vₙ = xₙ − x̄

where vₙ is the deviation of data point xₙ from the mean x̄ of the sample data set and s is the standard deviation; if the deviation vₙ of any data point xₙ satisfies |vₙ| > 3s (the 3σ criterion), that data point is judged abnormal and eliminated.
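The rejection rule described here (the Pauta, or 3σ, criterion) can be sketched in a few lines of Python; the function name is illustrative.

```python
import statistics

def reject_outliers(samples, k=3.0):
    """Pauta (3-sigma) criterion: drop any sample whose deviation from
    the mean exceeds k standard deviations."""
    mean = statistics.fmean(samples)
    s = statistics.pstdev(samples)   # population standard deviation
    return [x for x in samples if abs(x - mean) <= k * s]
```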
As a further aspect of the present invention, the risk analysis in step (4) specifically includes the following steps:
step (1): the method comprises the steps of deploying relevant information acquisition plug-ins on scheduling platforms of different systems or acquiring scheduling information recorded in the scheduling platforms of different systems through a syslog server, screening out the scheduling information meeting preset conditions of staff, and then processing the remaining scheduling information into uniform-format scheduling information;
step (2): matching the user operation behavior recorded in the processed scheduling information with the abnormal behavior characteristics, generating corresponding alarm information according to the matching result, calculating the risk scores of the alarm information, outputting the calculation result, feeding back the alarm information to related staff, interrupting the related operation process, and recording and feeding back the IP address of related equipment and the user information.
As a further scheme of the invention, the specific steps of the platform performance optimization in step (4) are as follows:
the first step: generating a starting linked list for each group of functional interfaces of the dispatching platform, and further linking each group of starting linked lists according to the number of times of each accessed group from small to large in sequence of the LRU linked list;
and a second step of: according to the interactive information of each group of functional interfaces, updating data of each group of pages in each group of starting linked lists in real time, sequentially selecting the functional interface starting linked list with the least accessed times from the head of the LRU linked list to select the victim page, and stopping until enough victim pages are recovered;
and a third step of: combining the selected victim page into a block and marking, waking up a compression driver to analyze the marked block, obtaining a physical page belonging to the block, copying the physical page into a buffer area, then calling a compression algorithm to compress the physical page in the buffer area into a compression block, and storing the compression block into a compression area of a performance optimization module.
Compared with the prior art, the invention has the beneficial effects that:
1. According to the super computing power scheduling server and resource management method, server nodes that have finished data transmission are added to the node queue of the transmission track, and server nodes that have not finished data transmission are added to the alternative queue; the set of all transmission tracks of each group of servers is represented as a population, and a population matrix is generated by combining a genetic algorithm; two groups of individuals are randomly selected from the population matrix, a certain segment of path is selected from each of them, and the segments are exchanged to obtain two new groups of individuals; a group of individuals is then randomly selected and two segments of paths within it are randomly exchanged for path optimization; each node is then traversed from the path end point, and if a node can be connected to the starting point without obstruction, the nodes between the starting point and that node are redundant and are deleted once confirmed; the fitness function of the path is recalculated, and the resource scheduling path is continuously optimized through iteration. By continuously optimizing the path through the genetic algorithm, the limitations on use can be greatly reduced and the resource scheduling efficiency of the server greatly improved.
2. In the super computing power scheduling server and resource management method, the running information of the original scheduling strategy is collected as a sample data set, the standard deviation of the sample data set is obtained, and data in the sample data set are rejected according to the standard deviation; the remaining data are preprocessed and used as a training set, which is input into the input layer of the neural network; the central vector value is determined so that the output layer is a linear combination of the hidden-node outputs, and a least-squares recursion method is adopted to define the energy function of the multi-round learning of the convolutional neural network; when the energy function value is smaller than the target error, the training process ends and the scheduling update model is output, otherwise training continues; the optimal parameters of the scheduling update model are determined, and the corresponding scheduling strategy is generated after the input, convolution, pooling, full-connection and output processing of the scheduling update model. The method can model and search parameters automatically without manual operation by staff, which lowers the difficulty of use while improving resource scheduling management efficiency and the efficiency of scheduling strategy formulation.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention.
Fig. 1 is a flow chart of the super computing power scheduling server and resource management method provided by the invention.
Detailed Description
Example 1
Referring to fig. 1, a super computing power scheduling server and a resource management method, the management method specifically comprises the following steps:
planning a server resource scheduling path.
Specifically, information of each server node is collected and the data transmission rate of each server is calculated; server nodes that have finished data transmission are added to the node queue of the transmission track, and server nodes that have not finished data transmission are added to the alternative queue. The set of all transmission tracks of each group of servers is represented as a population, and a population matrix is generated by combining a genetic algorithm. Two groups of individuals are randomly selected from the population matrix, a certain segment of path is selected from each, and the segments are exchanged to obtain two new groups of individuals; a group of individuals is then randomly selected and two segments of paths within it are randomly exchanged for path optimization. Each node is then traversed from the path end point; if a node can be connected to the starting point without obstruction, the nodes between the starting point and that node are redundant and are deleted once confirmed. The fitness function of the path is recalculated, and the resource scheduling path is continuously optimized through iteration.
And constructing a classification map library and classifying and integrating each data resource.
Specifically, the classification map library acquires the data labels of all servers as the knowledge range and simultaneously acquires classified knowledge data from the Internet; each group of characteristic keywords is extracted and converted into word vectors. A TransD model is then constructed to receive the related data; transfer vectors are used to project the entity and information embedding vectors into a common space, and the degree of preference for specific information is measured through the transfer vectors and the Euclidean distance between the information embeddings. A topic text knowledge subgraph is constructed; for each entity, the relations connected to it in the subgraph are extracted and learned with a knowledge graph embedding model, the learned entity vectors are taken as the input of a CNN layer, and the corresponding entity table and relation table are output. A Neo4j database is then installed and configured, the Neo4j service is started, and the data are imported to complete the knowledge graph construction. Finally, the server resources are classified according to the constructed classification map library, repeated data resources are screened out, and each group of data is integrated into the corresponding classification label in a tree distribution form.
Example 2
Referring to fig. 1, a super computing power scheduling server and a resource management method, the management method specifically comprises the following steps:
and constructing and updating the original scheduling strategy according to the demands of the staff.
Specifically, the running information of the original scheduling strategy is collected as a sample data set; the average value of the sample data set is counted, the standard deviation of the sample data set is obtained from the calculated average value, and data in the sample data set are rejected according to the standard deviation. The remaining data are standardized, and each group of processed data is normalized and used as a training set; a group of convolutional neural networks is constructed, the parameter-setting vectors of the convolutional neural networks are assigned, and the number of neurons in each neural network layer is determined. The training set is input into the input layer of the neural network, and the central vector value is determined so that the output layer is a linear combination of the hidden-node outputs; a least-squares recursion method is adopted to define the energy function of the multi-round learning of the convolutional neural network. When the energy function value is smaller than the target error, the training process ends and the scheduling update model is output; otherwise, training continues. The optimal parameters of the scheduling update model are then determined, the corresponding scheduling strategy is generated through the input, convolution, pooling, full-connection and output processing of the scheduling update model, and the strategy is fed back to staff for checking.
In this embodiment, the specific calculation formula of the standard deviation is as follows:

s = √( (1/n) · Σᵢ₌₁ⁿ (xᵢ − x̄)² ),  vₙ = xₙ − x̄

where vₙ is the deviation of data point xₙ from the mean x̄ of the sample data set and s is the standard deviation; if the deviation vₙ of any data point xₙ satisfies |vₙ| > 3s (the 3σ criterion), the data point is judged abnormal and eliminated.
And recording the scheduling information and performing risk analysis on the scheduling information and simultaneously optimizing the performance of the platform.
Specifically, relevant information-acquisition plug-ins are deployed on the scheduling platforms of the different systems, or the scheduling information recorded in those platforms is acquired through a syslog server; the scheduling information meeting the staff's preset conditions is screened out, and the remaining scheduling information is processed into a uniform format. The user operation behaviours recorded in the processed scheduling information are matched against the abnormal-behaviour features, and corresponding alarm information is generated according to the matching results; the risk score of each alarm is calculated and the result output, the alarm information is fed back to the relevant staff, the relevant operation process is interrupted, and the IP address of the relevant equipment and the user information are recorded and fed back.
Specifically, a start linked list is generated for each group of functional interfaces of the scheduling platform, and the start linked lists are linked into an LRU linked list in ascending order of the number of times they are accessed. The data of each group of pages in each start linked list are updated in real time according to the interaction information of each group of functional interfaces; starting from the head of the LRU linked list, victim pages are successively selected from the start linked list of the least-accessed functional interface until enough victim pages have been recovered. The selected victim pages are merged into a block and marked, the compression driver is woken up to parse the marked block and obtain the physical pages belonging to it, the physical pages are copied into a buffer, a compression algorithm is then called to compress the physical pages in the buffer into a compressed block, and the compressed block is stored into the compression area of the performance optimization module.
Claims (7)
1. A super computing power scheduling server and resource management method, characterized in that the management method comprises the following specific steps:
(1) Planning a server resource scheduling path;
(2) Constructing a classification map library and classifying and integrating each data resource;
(3) Constructing and updating an original scheduling strategy according to the demands of staff;
(4) And recording the scheduling information and performing risk analysis on the scheduling information and simultaneously optimizing the performance of the platform.
2. The supercomputing power scheduling server and resource management method according to claim 1, wherein the scheduling path planning in the step (1) specifically comprises the following steps:
step one: collecting information of each server node, calculating data transmission rate of each server, adding the server node which completes data transmission to a node queue of a transmission track, and adding the server node which does not complete data transmission to an alternative queue;
step two: representing the set of all transmission tracks of each group of servers as a population, generating a population matrix by combining a genetic algorithm, randomly selecting two groups of individuals from the population matrix, selecting a certain path from the two groups of individuals respectively, and exchanging to obtain two new groups of individuals;
step three: and randomly selecting a group of individuals, randomly selecting two sections of paths in the individuals for exchanging to perform path optimization, traversing each node from the end point of the path, if a certain node can be connected with the starting point in a barrier-free manner, determining that the node between the starting point and the node is a redundant node, deleting the redundant nodes after the redundant node is confirmed, recalculating the fitness function of the path, and continuously optimizing the resource scheduling path through continuous iteration.
3. The supercomputing power scheduling server and resource management method according to claim 1, wherein the classification integration in the step (2) specifically comprises the following steps:
step (1): the method comprises the steps that a classification map library obtains data labels of all servers as knowledge ranges, then obtains classification knowledge data in the Internet at the same time, extracts all groups of characteristic keywords, and converts all the characteristic keywords into word vectors at the same time;
step (2): constructing a TransD model to receive related data, using a transfer vector to represent an origin to information embedded vector in a space, and measuring the preference degree of specific information through transfer and Euclidean distance of the information;
step (3): constructing a topic text knowledge subgraph by a TransD model, extracting the relation between the subgraph and an entity according to the entity, adopting a knowledge graph embedding model to learn, taking the learned entity vector as the input of a CNN layer, outputting a corresponding entity table and a relation table, then installing and configuring a Neo4j database, and simultaneously starting Neo4j service and importing data to complete knowledge graph construction;
step (4): and classifying the server resources according to the constructed classification map library, screening out repeated data resources, and integrating the data of each group into corresponding classification labels according to the tree distribution form.
4. The supercomputing power scheduling server and resource management method according to claim 3, wherein the specific steps of updating the scheduling policy in step (3) are as follows:
step I: the running information of the original scheduling policy is collected as a sample data set; the mean of the sample data set is then calculated, the standard deviation is obtained from the calculated mean, and data are rejected from the sample data set according to the standard deviation;
step II: the remaining data are standardized, and each group of processed data is normalized and taken as the training set; a convolutional neural network is constructed, its parameter-setting vectors are assigned, and the number of neurons in each network layer is determined;
step III: the training set is fed into the input layer of the network and the center-vector values are determined, so that the output layer is a linear combination of the hidden-node outputs; an energy function for the multi-round learning of the convolutional neural network is defined by recursive least squares; when the energy-function value is smaller than the target error, the training process ends and the scheduling-update model is output, otherwise training continues;
step IV: the optimal parameters of the scheduling-update model are determined; within the model, the corresponding scheduling policy is then generated through the input, convolution, pooling, fully-connected and output stages, and the policy is fed back to the staff for checking.
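The stopping rule of step III — end training once a least-squares energy function drops below a target error — can be sketched as follows. This is an illustrative stand-in (a single linear layer trained by gradient descent, not the claimed CNN), and the function name `train_until_target` and default values are assumptions:

```python
import numpy as np

def train_until_target(X, y, target_error=1e-3, lr=0.1, max_rounds=10000):
    """Multi-round training that stops when the least-squares energy
    function E = ½·Σ(pred − y)² falls below the target error."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=X.shape[1])          # random initial parameters
    energy = np.inf
    for _ in range(max_rounds):
        err = X @ w - y
        energy = 0.5 * np.sum(err ** 2)      # least-squares energy function
        if energy < target_error:            # target reached: output the model
            break
        w -= lr * (X.T @ err) / len(y)       # otherwise continue training
    return w, energy
```

The same loop structure applies regardless of the model inside: compute the energy each round, compare against the target error, and either stop and emit the model or take another update step.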
5. The supercomputing power scheduling server and resource management method according to claim 4, wherein the standard deviation in step I is specifically calculated as follows:

σ = √( Σᵢ₌₁ⁿ (xᵢ − x̄)² / (n − 1) )

wherein x̄ is the mean of the sample data set, vₙ is the deviation of a datum from the mean, and σ is the standard deviation; if the deviation vₙ = |xᵢ − x̄| of any datum xᵢ satisfies vₙ > 3σ, the datum is judged to be abnormal data and rejected.
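The 3σ rejection rule of claim 5 can be written directly with the standard library; a minimal sketch, with `reject_outliers` as an assumed name:

```python
import statistics

def reject_outliers(samples):
    """3-sigma rule: compute the mean and sample standard deviation,
    then reject any datum whose deviation |x - mean| exceeds 3*sigma."""
    mean = statistics.fmean(samples)
    sigma = statistics.stdev(samples)   # sample std dev, divisor n - 1
    return [x for x in samples if abs(x - mean) <= 3 * sigma]
```

For example, twenty readings near 10.0 plus one reading of 1000.0 yield a mean and σ dominated by the outlier, yet the outlier's deviation still exceeds 3σ, so only it is removed.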
6. The supercomputing power scheduling server and resource management method according to claim 1, wherein the risk analysis in step (4) specifically comprises the following steps:
step (1): information-collection plug-ins are deployed on the scheduling platforms of the different systems, or the scheduling information recorded on those platforms is collected through a syslog server; the scheduling information that meets the conditions preset by the staff is screened out, and the remaining scheduling information is processed into a uniform format;
step (2): the user operation behavior recorded in the processed scheduling information is matched against the abnormal-behavior features, corresponding alarm information is generated according to the matching result, and the risk score of the alarm information is calculated and output; the alarm information is fed back to the relevant staff, the related operation process is interrupted, and the IP address of the related device and the user information are recorded and reported.
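The matching-and-scoring step can be sketched as a rule table applied to each normalized record. The feature names, weights, and the interruption threshold below are all hypothetical — the claim does not enumerate the abnormal-behavior features or the scoring formula:

```python
# Hypothetical abnormal-behavior feature -> risk-weight table
ABNORMAL_FEATURES = {
    "off_hours_login": 3,
    "bulk_job_delete": 5,
    "privilege_change": 8,
}

def analyze(record):
    """Match one normalized scheduling record against the abnormal-behavior
    features; return alarm info with a risk score, or None if benign."""
    hits = [f for f in record.get("behaviors", []) if f in ABNORMAL_FEATURES]
    if not hits:
        return None
    score = sum(ABNORMAL_FEATURES[f] for f in hits)
    return {
        "ip": record.get("ip"),            # recorded for feedback, per claim 6
        "user": record.get("user"),
        "matched": hits,
        "risk_score": score,
        "interrupt": score >= 8,           # assumed interruption threshold
    }
```

A record with no matched features produces no alarm; a high-scoring record carries the device IP and user identity forward so the operation can be interrupted and reported.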
7. The supercomputing power scheduling server and resource management method according to claim 1, wherein the specific steps of optimizing the platform performance in step (6) are as follows:
the first step: a start linked list is generated for each group of functional interfaces of the scheduling platform, and the start linked lists are then chained onto the LRU linked list in ascending order of their access counts;
the second step: according to the interaction information of each group of functional interfaces, the pages in each start linked list are updated in real time; starting from the head of the LRU linked list, victim pages are selected in turn from the start linked list of the least-accessed functional interface, until enough victim pages have been reclaimed;
the third step: the selected victim pages are merged into a block and marked; the compression driver is woken up to parse the marked block, obtain the physical pages belonging to it, and copy them into a buffer; a compression algorithm is then invoked to compress the physical pages in the buffer into a compressed block, which is stored in the compression area of the performance-optimization module.
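The three steps above can be sketched in one small class. This is an illustrative model only — the class name, the use of an ordered dict for the LRU list, and zlib as the compression algorithm are all assumptions, since the claim names neither a data structure nor an algorithm:

```python
import zlib
from collections import OrderedDict

class InterfaceLRU:
    """Sketch of claim 7: per-interface page lists ordered by access count;
    victims come from the least-accessed interface, are merged into a
    block, compressed, and stored in the compression area."""

    def __init__(self):
        self.lists = OrderedDict()    # interface -> (access_count, pages)
        self.compressed_area = []     # stand-in for the compression area

    def record_access(self, interface, page):
        count, pages = self.lists.pop(interface, (0, []))
        pages.append(page)
        self.lists[interface] = (count + 1, pages)
        # keep the LRU list ordered by ascending access count (step 1)
        self.lists = OrderedDict(
            sorted(self.lists.items(), key=lambda kv: kv[1][0]))

    def reclaim(self, needed):
        victims = []
        # walk from the head (least-accessed interface first), step 2
        for iface in list(self.lists):
            _, pages = self.lists[iface]
            while pages and len(victims) < needed:
                victims.append(pages.pop(0))
            if len(victims) >= needed:
                break
        # merge victims into a block, compress, store (step 3)
        block = b"".join(victims)
        self.compressed_area.append(zlib.compress(block))
        return len(victims)
```

Because the list is re-sorted on every access, reclamation always drains the coldest interface's pages first, mirroring the head-first victim selection in the second step.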
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310453599.3A CN116541166A (en) | 2023-04-25 | 2023-04-25 | Super-computing power scheduling server and resource management method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116541166A true CN116541166A (en) | 2023-08-04 |
Family
ID=87451590
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310453599.3A Pending CN116541166A (en) | 2023-04-25 | 2023-04-25 | Super-computing power scheduling server and resource management method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116541166A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117272838A (en) * | 2023-11-17 | 2023-12-22 | 恒海云技术集团有限公司 | Government affair big data platform data acquisition optimization method |
CN117272838B (en) * | 2023-11-17 | 2024-02-02 | 恒海云技术集团有限公司 | Government affair big data platform data acquisition optimization method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7169369B2 (en) | Method, system for generating data for machine learning algorithms | |
CN111611488B (en) | Information recommendation method and device based on artificial intelligence and electronic equipment | |
Ghodousi et al. | Analyzing public participant data to evaluate citizen satisfaction and to prioritize their needs via K-means, FCM and ICA | |
CN112685504A (en) | Production process-oriented distributed migration chart learning method | |
CN112989059A (en) | Method and device for identifying potential customer, equipment and readable computer storage medium | |
CN112100372B (en) | Head news prediction classification method | |
JP7172612B2 (en) | Data expansion program, data expansion method and data expansion device | |
CN116541166A (en) | Super-computing power scheduling server and resource management method | |
CN110830291A (en) | Node classification method of heterogeneous information network based on meta-path | |
CN113128667A (en) | Cross-domain self-adaptive graph convolution balance migration learning method and system | |
CN112215655A (en) | Client portrait label management method and system | |
CN114265954B (en) | Graph representation learning method based on position and structure information | |
CN110705889A (en) | Enterprise screening method, device, equipment and storage medium | |
CN116227989A (en) | Multidimensional business informatization supervision method and system | |
CN116187675A (en) | Task allocation method, device, equipment and storage medium | |
CN112581177B (en) | Marketing prediction method combining automatic feature engineering and residual neural network | |
CN113835739A (en) | Intelligent prediction method for software defect repair time | |
Si | Classification Method of Ideological and Political Resources of Broadcasting and Hosting Professional Courses Based on SOM Artificial Neural Network | |
CN109308565B (en) | Crowd performance grade identification method and device, storage medium and computer equipment | |
CN113742495B (en) | Rating feature weight determining method and device based on prediction model and electronic equipment | |
CN115982646B (en) | Management method and system for multisource test data based on cloud platform | |
CN113792163B (en) | Multimedia recommendation method and device, electronic equipment and storage medium | |
CN117706954B (en) | Method and device for generating scene, storage medium and electronic device | |
CN112104467B (en) | Cutover operation risk rating method and device and computing equipment | |
US20230376796A1 (en) | Method and system for knowledge-based process support |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||