CN101609416A - Method for improving performance tuning speed of distributed system - Google Patents

Method for improving performance tuning speed of distributed system

Info

Publication number
CN101609416A
Authority
CN
China
Prior art keywords
parameter
response time
groups
performance
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2009100882256A
Other languages
Chinese (zh)
Other versions
CN101609416B (en
Inventor
Cao Junwei (曹军威)
Zhang Fan (张帆)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CERTUSNET CORP
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN2009100882256A priority Critical patent/CN101609416B/en
Publication of CN101609416A publication Critical patent/CN101609416A/en
Application granted granted Critical
Publication of CN101609416B publication Critical patent/CN101609416B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

A method for improving the performance tuning speed of a distributed system, belonging to the field of distributed computer system performance optimization. In a distributed system composed of three tiers of servers, comprising a web server, an application server, and a database server, the range of each parameter is quantized onto a unified scale and parameter sets are obtained by uniform sampling. A three-layer neural network model with two hidden layers, every node of which uses the logistic function, is trained on these samples. New parameter sets are then sampled repeatedly and evaluated in the neural network to obtain coarse response times and coarse throughputs. Based on the requirements of the system performance optimization, the ordered performance curve type and the noise grade of the problem are determined, and the parameters of a regression function are obtained accordingly so as to draw, according to the set performance requirement, the optimized parameter solution we need. The advantage of the invention is that it improves system performance while also reducing testing time.

Description

Method for improving performance tuning speed of distributed system
Technical field
The invention belongs to the field of computer performance optimization, and specifically to the performance optimization of three-tier server systems (web server, application server, and database server).
Background technology
With the continuous growth in the number of online users and rising application demands, the field of distributed system performance tuning faces many challenges. Cluster computing, grid computing, and other large-scale platforms all use complex distributed systems to satisfy the varied demands of individuals, business, and government at every level. Today, cloud computing, widely promoted by industry and academia and intended to fundamentally change the existing mode of computer operation, has emerged. Its core idea is to place data on vast back-end server "clouds", so that front-end users need only a browser to access, anywhere and at any time, all the resources placed in the "cloud". After this idea was born, Microsoft, Google, IBM, Amazon, and others rushed to release their own cloud computing products and technologies. In China, the China Mobile Research Institute is likewise studying data center architectures for mobile phone access under cloud computing. Across the whole technology trend, building server systems for massive data centers is the most visible direction these companies are taking in future market competition. At the same time, every large commercial web site, such as Amazon, eBay, and China's Taobao, runs its own server cluster. System performance tuning has always been a topic of concern for large established companies, academic conferences, and research journals.
System tuning has a hardware ("hard") side and a software ("soft") side. Hard tuning mainly means selecting, for the particular application type and scale being faced, a matching cluster of hardware devices, solving the supply-demand imbalance through sheer quantity. It is clear that relying only on buying more hardware brings enterprises a huge financial burden; it is an extensive mode of performance improvement, and often the gain obtained from hardware investment alone is rather small. The intensive mode of performance improvement, i.e. adjusting system performance from the software side, has therefore become the research focus of the present stage. The question is how, under limited resources, to raise system performance enough to satisfy user demand; this is the more challenging problem. Two indices are mainly examined:
1. The average throughput of the system, i.e. the average number of requests processed per unit time.
2. The average response time of the system, i.e. the average processing time per request.
Within the field of soft tuning, comprehensive adjustment of system configuration parameters (such as the session time, maximum thread count, maximum connection count, buffer pool size, etc.) stands out mainly because it is simple to operate, low in cost, and clearly effective. Traditional tuning treats parameter configuration optimization as a black-box optimization problem and then applies methods such as evolutionary computation. Recently, some researchers proposed the Smart Hill Climbing algorithm, which makes full use of gradient information and past optimization results; its simulation results are clearly better than traditional simulated annealing and random iterative search. Others proposed the Covariance Matrix Algorithm, which, when probing system performance, not only performs breadth-first exploration but also combines it with depth-first exploitation. Its test results improve performance by 3% over hill climbing, making it one of the most effective of the cutting-edge techniques in the current literature. Meanwhile, checkpoint techniques have also recently been proposed and applied to system performance improvement; they record the system state at each "checkpoint" instant in order to analyze the performance at that moment. Each of the above classes of methods has its merits and improves overall system performance to some extent, but their common weakness is the long measurement time. Although they have good theoretical foundations and test value, their practical value when transferred to commercial use is very limited.
Facing the expanding parameter space and the rather long testing time, researchers have also recently proposed predicting system performance in order to improve it while reducing simulation time. The published test results do reduce measurement time and improve system performance, but in the given example the search space is well suited to discrete settings; once generalized to a continuous space, the problems stand out, because the search is effective only for specific parameter configurations and its search capability is therefore somewhat restricted.
Under this application demand, whether a system can be designed that significantly improves the performance of a distributed computing system without requiring a large amount of measurement time has become an important current issue.
Summary of the invention
The object of the invention is to provide an efficient method for distributed system performance tuning that not only greatly reduces the response time and improves throughput, but also reduces the time spent on tuning.
The invention is characterized in that the described method is a tuning method for a distributed system composed of three tiers of servers, comprising a web server, an application server, and a database server.
The tests of the present invention were carried out on a three-tier distributed system. The first tier is the web server layer, which mainly serves HTML/XML; the second tier is the application server layer; the third tier is the database server. A user logs in through the application system, lists jobs, selects a job, executes it, and finally returns, which is essentially the workflow most users carry out after logging in to an OA (office automation) system. The system generates 410 concurrent users to stress the system so that its actual performance is exhibited. The system parameters are adjusted by comparing ordinal optimization with the covariance matrix algorithm. The test results show that ordinal optimization and the covariance matrix algorithm differ little in overall test effect, but that ordinal optimization saves more than half of the testing time.
Description of drawings
Fig. 1. Flow chart of the method of the invention.
Fig. 2. Type map of the ordered performance curve (OPC) of the invention (Bell-type curve).
Fig. 3. Structural diagram and workflow diagram of the system:
Fig. 3.1 Structural diagram of the system,
Fig. 3.2 Workflow diagram of the system.
Fig. 4. Throughput comparison of the covariance matrix algorithm (CMA) and ordinal optimization (OO) (solid line: CMA; dotted line: OO).
Fig. 5. Response time comparison of the covariance matrix algorithm (CMA) and ordinal optimization (OO) (solid line: CMA; dotted line: OO).
Fig. 6. Test time comparison of the covariance matrix algorithm (CMA) and ordinal optimization (OO) (solid line: CMA; dotted line: OO).
Fig. 7. Block diagram of the rough-model neural network.
Embodiment
Premise: suppose the distributed computing system to be tuned contains n parameters {p_1, p_2, ..., p_n}, where parameter p_i has the range [p_i_min, p_i_max]. The algorithm is as follows:
(1) Linearly quantize every range [p_i_min, p_i_max] onto [0, 100].
(2) Draw 200 parameter groups uniformly at random from [0, 100]: {v_{1,1}, v_{2,1}, ..., v_{n,1}}, {v_{1,2}, v_{2,2}, ..., v_{n,2}}, ..., {v_{1,200}, v_{2,200}, ..., v_{n,200}}, then linearly quantize all obtained values back into the original intervals [p_i_min, p_i_max].
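Steps (1) and (2) can be sketched as follows. This is a minimal illustration rather than the patent's implementation; the helper names `quantize`, `dequantize`, and `sample_groups` are invented for the example, and the ranges listed are taken from the seven parameters discussed later in the description (units dropped).

```python
import random

def quantize(value, lo, hi):
    """Linearly map a parameter value from its range [lo, hi] onto [0, 100]."""
    return (value - lo) / (hi - lo) * 100.0

def dequantize(q, lo, hi):
    """Map a point on the unified scale [0, 100] back into [lo, hi]."""
    return lo + q / 100.0 * (hi - lo)

# Ranges of the 7 parameters given later in the description (units dropped).
ranges = [(10, 500), (10, 200), (5, 100), (5, 50),
          (100, 500), (8, 64), (128, 1024)]

def sample_groups(n_groups, ranges, rng=None):
    """Draw n_groups parameter vectors uniformly on [0, 100]^n and map each
    coordinate back to its original interval, as in steps (1)-(2)."""
    rng = rng or random.Random(0)
    groups = []
    for _ in range(n_groups):
        qs = [rng.uniform(0.0, 100.0) for _ in ranges]
        groups.append([dequantize(q, lo, hi) for q, (lo, hi) in zip(qs, ranges)])
    return groups

samples = sample_groups(200, ranges)  # the 200 groups later used for training
```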
(3) Test the outputs under the above 200 parameter groups; let the response times and throughputs be (t_{1,1}, r_{2,1}), (t_{1,2}, r_{2,2}), ..., (t_{1,200}, r_{2,200}). The 200 tested input/output results are used to train a three-layer neural network model with two hidden layers (for the principle and analysis of this model see Appendix 1; see also the usage of neural network models in Chapter 4, page 111, of Neural Networks by Simon Haykin). Every node of the neural network uses the logistic function. MATLAB provides a dedicated neural network toolbox (Start -> Toolboxes -> Neural Network); supplying the 200 input/output results above as training data in the toolbox and clicking Start trains this neural network model.
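The patent trains the model through the MATLAB toolbox GUI; as a rough, self-contained alternative, the same architecture (two hidden layers, logistic activation at every node) can be sketched in NumPy. The layer widths, learning rate, epoch count, and the synthetic training data below are illustrative assumptions, not values from the patent.

```python
import numpy as np

def logistic(z):
    """The logistic function used at every node of the network."""
    return 1.0 / (1.0 + np.exp(-z))

class RoughModel:
    """Three-layer MLP with two hidden layers: inputs are the n quantized
    parameters, outputs approximate (response time, throughput)."""

    def __init__(self, n_in, h1=10, h2=10, n_out=2, seed=0):
        rng = np.random.default_rng(seed)
        self.W = [rng.normal(0.0, 0.5, (n_in, h1)),
                  rng.normal(0.0, 0.5, (h1, h2)),
                  rng.normal(0.0, 0.5, (h2, n_out))]

    def _forward(self, X):
        acts = [X]
        for W in self.W:
            acts.append(logistic(acts[-1] @ W))
        return acts

    def predict(self, X):
        return self._forward(X)[-1]

    def train(self, X, Y, lr=0.5, epochs=2000):
        """Plain batch gradient descent on squared error (backpropagation)."""
        for _ in range(epochs):
            a0, a1, a2, a3 = self._forward(X)
            d3 = (a3 - Y) * a3 * (1.0 - a3)            # output-layer delta
            d2 = (d3 @ self.W[2].T) * a2 * (1.0 - a2)  # second hidden layer
            d1 = (d2 @ self.W[1].T) * a1 * (1.0 - a1)  # first hidden layer
            self.W[2] -= lr * a2.T @ d3 / len(X)
            self.W[1] -= lr * a1.T @ d2 / len(X)
            self.W[0] -= lr * a0.T @ d1 / len(X)

# Synthetic stand-in for the 200 measured input/output pairs.
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, (200, 7))                   # 7 quantized parameters
Y = np.column_stack([logistic(X.sum(axis=1) - 3.5),   # fake response time
                     logistic(2.0 * X[:, 0] - 1.0)])  # fake throughput
net = RoughModel(n_in=7)
mse_before = float(np.mean((net.predict(X) - Y) ** 2))
net.train(X, Y)
mse_after = float(np.mean((net.predict(X) - Y) ** 2))
```

Once trained, `net.predict` plays the role of the coarse model: fresh parameter groups are pushed through it instead of being measured on the real system.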
This model need only be computed once. Repeat steps (1) and (2) to choose another 200 parameter groups {v'_{1,1}, v'_{2,1}, ..., v'_{n,1}}, ..., {v'_{1,200}, v'_{2,200}, ..., v'_{n,200}} as samples, and evaluate them with the trained neural network of Fig. 1 to obtain the outputs (t'_{1,1}, r'_{2,1}), (t'_{1,2}, r'_{2,2}), ..., (t'_{1,200}, r'_{2,200}). These results are called the coarse throughput and the coarse response time, respectively.
(4) Use the coarse throughput or the coarse response time to estimate the OPC (Ordered Performance Curve) type of the problem: Flat, U-Shaped, Neutral, Bell, or Steep. Here we analyze system performance using the coarse response time alone. Extract the 200 coarse response times r'_{1,1}, r'_{1,2}, ..., r'_{1,200} obtained above and sort them; suppose the coarse response times after sorting in ascending order are r'_{1,[1]}, r'_{1,[2]}, ..., r'_{1,[200]}, and that their corresponding parameter configurations are {v'_{1,[1]}, v'_{2,[1]}, ..., v'_{n,[1]}}, {v'_{1,[2]}, v'_{2,[2]}, ..., v'_{n,[2]}}, ..., {v'_{1,[200]}, v'_{2,[200]}, ..., v'_{n,[200]}}. The OPC type is computed as follows.
Substep (1): for the i-th response time r'_{1,[i]}, compute y_i = (r'_{1,[i]} - r'_{1,[1]}) / (r'_{1,[200]} - r'_{1,[1]}).
Substep (2): for the i-th response time r'_{1,[i]}, compute x_i = (i - 1)/(200 - 1) = (i - 1)/199.
Traversing all i in [1, 200] yields 200 pairs (x_i, y_i); denoting the curve y = A(x) and plotting it on coordinate axes gives the OPC type of the problem. The abscissa x_i is the rank 1 to 200 after normalization, and the ordinate y_i is the normalized ordered performance, as shown in Fig. 2. In repeated experiments the problem type was a standard Bell type, which indicates that the problem itself belongs to the Bell type.
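The two substeps above can be sketched as follows, assuming only that the coarse response times are available as a list (the function name `opc_points` is invented for the example):

```python
def opc_points(coarse_times):
    """Return the normalized OPC points (x_i, y_i): sort the coarse response
    times ascending, then scale ranks and values onto [0, 1]."""
    r = sorted(coarse_times)
    n = len(r)
    span = r[-1] - r[0]  # r'_{1,[n]} - r'_{1,[1]}
    return [(i / (n - 1), (r[i] - r[0]) / span) for i in range(n)]

# Four samples are enough to see the shape of the normalization.
pts = opc_points([5.0, 1.0, 3.0, 2.0])
```

Plotting these pairs reproduces the curve whose shape (Flat, U-Shaped, Neutral, Bell, or Steep) is matched against Fig. 2.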
(5) Determine the noise grade. Average the 200 groups of outputs obtained in (3), i.e.:
(t̄, r̄) = (Σ_{i=1}^{200} t_{1,i}/200, Σ_{i=1}^{200} r_{2,i}/200). Then compute the noise grades of t and r respectively as N_t = |max(t_{1,i} - t̄)| / [max(t_{1,i}) - min(t_{1,i})] and N_r = |max(r_{2,i} - r̄)| / [max(r_{2,i}) - min(r_{2,i})], where i = 1, 2, ..., 200. Take the larger of N_t and N_r as the noise size: if this value is greater than 2.5, the problem is of the large-noise type; between 0.5 and 2.5, the medium-noise type; less than 0.5, the small-noise type.
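The noise formulas translate directly into code; the helper names below are invented for the example, and the classification thresholds are the ones stated in the text:

```python
def noise_grade(values):
    """Maximum deviation from the mean, normalized by the sample range,
    per the formula N = |max(v_i - mean)| / [max(v_i) - min(v_i)]."""
    mean = sum(values) / len(values)
    return max(abs(v - mean) for v in values) / (max(values) - min(values))

def noise_type(level):
    """Classify the larger of N_t and N_r using the stated thresholds."""
    if level > 2.5:
        return "large"
    if level > 0.5:
        return "medium"
    return "small"

grade = noise_grade([1.0, 2.0, 3.0, 4.0, 5.0])  # mean 3, max dev 2, range 4
```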
(6) Determine the size s of the selection set.
Substep (1): determine the size g of the "good enough" set. Ordinal optimization theory softens the search for the single best parameter configuration (g = 1) into a search for "good enough" configurations, i.e. from seeking the "optimal solution" to seeking a "satisfactory solution". In our tests g = 20 defines satisfactory: it suffices to find a parameter configuration whose true performance ranks in the top 20.
Substep (2): determine the number k of aligned solutions. Having chosen the top 20 true performers as good enough, we must also decide how many of these 20 our method must find before we are satisfied. This number is the alignment count k, which we set to 5: finding any 5 of the true top 20 satisfies us.
Substep (3): determine the alignment probability α. The requirement of substep (2), finding any 5 of the true top 20, must be guaranteed with some probability; we require this probability to be no less than the alignment probability α, whose value is set to 98%.
Substep (4): obtain the size s of the sampling space by consulting the regression function table (this table is given in Appendix 2; see also page 20 of Ordinal Optimization by Ho Yu-Chi et al.). First, according to the OPC type and the noise grade determined in steps (4) and (5), determine the parameters Z_1, Z_2, Z_3, Z_4 of the regression function s(k, g) = e^{Z_1} · k^{Z_2} · g^{Z_3} + Z_4, where e is 2.71828, g is the 20 set in substep (1), and k is the 5 set in substep (2); the size s of the selection set can then be computed directly.
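Once Z_1 through Z_4 have been read from the table, the regression function is a one-liner. The Z coefficients below are placeholders chosen for illustration; the real values depend on the OPC type and noise grade and must come from the lookup table:

```python
import math

def selection_size(k, g, z1, z2, z3, z4):
    """s(k, g) = e^{Z1} * k^{Z2} * g^{Z3} + Z4 from the ordinal-optimization
    regression table; rounded up here since s counts configurations."""
    return math.ceil(math.exp(z1) * (k ** z2) * (g ** z3) + z4)

# Hypothetical coefficients; the example settings from the text are k = 5, g = 20.
s = selection_size(k=5, g=20, z1=1.0, z2=0.9, z3=0.5, z4=3.0)
```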
(7) Test the parameter configurations ranked in the top s by coarse response time to obtain the optimal solution. Find the parameter configurations of the first s coarse response times in step (4), {v'_{1,[1]}, v'_{2,[1]}, ..., v'_{n,[1]}}, {v'_{1,[2]}, v'_{2,[2]}, ..., v'_{n,[2]}}, ..., {v'_{1,[s]}, v'_{2,[s]}, ..., v'_{n,[s]}}, test each group on the real system, and the best result obtained solves the problem.
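This final step reduces to a sort-and-slice over the coarse results; `select_top_s` is an invented helper name for the sketch:

```python
def select_top_s(configs, coarse_times, s):
    """Return the s parameter configurations with the smallest coarse
    response times; only these are then measured on the real system."""
    order = sorted(range(len(configs)), key=lambda i: coarse_times[i])
    return [configs[i] for i in order[:s]]

# Toy data: four configurations with their coarse response times.
configs = [["a"], ["b"], ["c"], ["d"]]
coarse = [0.9, 0.2, 0.5, 0.7]
shortlist = select_top_s(configs, coarse, 2)
```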
We considered 7 system parameters in the tests: [MaxKeepAliveRequests, KeepAliveTimeOut, ThreadCacheSize, MaxInactiveInterval, MaxConnections, KeyBufferSize, SortBufferSize]. Their physical meanings are, respectively: the maximum number of keep-alive requests per connection, the keep-alive timeout, the size of the thread cache, the maximum inactivity interval of a session, the maximum number of connections, the key buffer size, and the sort buffer size. After the system is stress-tested, adjusting these values can improve overall system performance. Their default values are [100, 5 s, 8, 2 s, 400, 10 MB, 256 KB], and their ranges are [10, 500] × [10 s, 200 s] × [5, 100] × [5 s, 50 s] × [100, 500] × [8 MB, 64 MB] × [128 KB, 1024 KB]. The ranges were chosen according to our particular experiment type, so the default values do not necessarily lie within the chosen ranges. The value space of some parameters depends on the application type, such as ThreadCacheSize and MaxInactiveInterval; other ranges depend on the characteristics of the server, such as SortBufferSize, for which MySQL allocates space when it needs to rebuild an index, and where too large a value can sometimes degrade system performance. To some extent, therefore, the values depend both on the application itself and on the type of server chosen for testing.
We generated 410 virtual users to test this three-tier system. To make the test match a real scenario more closely, we did not create all virtual users at once; instead we added users continuously at 60-second intervals, and users who finished all their tasks logged off. Using LoadRunner 8 we then obtained the following performance values for the system under the default parameters:
Average throughput (requests/second): 149.21
Average transaction response time (microseconds): 501.8
Several factors can affect system performance, such as the network environment, server-side resource utilization, and the execution time and order of client tasks. In our experiments, different performance results could be obtained even under identical parameter configurations, so for each parameter group, at its specific values, we ran the stress test several times and averaged the results to represent the system's true performance.
From the existing input/output data pairs we can train our neural network model and obtain the rough model of this system. In this particular problem we also use the three-layer network model introduced in Fig. 1 to obtain the noise type of the system. Testing showed this problem to be of the large-noise type, the reason being the many uncertain and random factors in the system. Combining this with the problem type (Bell type), we then set the good-enough solution set to g = 100 (in fact, because of the rather large search space, this value can be enlarged as needed), the number of good-enough solutions to align to k = 15 (likewise adjustable with the problem scale), and the alignment probability to α = 98%. Looking up the regression table, we obtain a selection set size of s(k, g) = 20.
The experiments were run under the performance evaluation indices above, namely the average throughput (the number of HTTP responses per unit time) and the average transaction response time. The test results are shown in the figures that follow.
In test 21, CMA obtained the optimal solution X_CMA = [126, 28.19, 16, 29, 89, 136, 323.51]^T, with throughput t(X_CMA) = 185.73 requests/second and response time r(X_CMA) = 361 microseconds, improvements of 24.5% and 28.1% over the default-parameter results, respectively. Ordinal optimization obtained its optimal solution in the 4th test, with parameters X_OO = [194, 46.28, 46, 35, 101, 188, 445.95]^T, throughput t(X_OO) = 156.6 requests/second and response time r(X_OO) = 392 microseconds, improvements of 5.0% and 21.9% over the default-parameter results, respectively.
Comparing the ordinal optimization method with the existing covariance matrix algorithm, the experimental figures show that CMA, as an evolutionary algorithm, is especially stable on a given problem. It combines depth-wise exploitation with breadth-wise exploration, which makes it easier for the search to find a satisfactory solution. In fact, like most heuristic algorithms, its drawback is precisely that it easily falls into a local optimum, at which point the algorithm terminates. Ordinal optimization instead relies on global information (using the neural network to estimate the problem type, the noise, and the distribution of satisfactory solutions), so it grasps the global picture better, though its shortcoming is the lack of local pursuit. For this kind of three-tier system optimization, however, which researchers summarize as a black-box model precisely because its complicated local characteristics are hard to describe, the advantage of ordinal optimization shows. In the experiments we used a (4, 9) CMA, and after 5 iterations the system converged to the optimal solution. Testing ordinal optimization in the same environment shows that it loses some stability in actual performance (its fluctuation range in the figure is much larger than CMA's), but its overall performance is nearly the same as CMA's.
Test time is also an aspect we consider. The testing time of ordinal optimization includes the time to build the neural network rough model, to determine the OPC type and noise grade, and to look up the selection set size. Since these need to be obtained only once, their time is negligible relative to the system tests. We then consider the test time of the real system: each test repeated 50 iterations, for 400 experiments in total.
From the time comparison in Fig. 6 it can be seen that ordinal optimization reduces the experimental time of some runs by as much as 75%; on average it reduces experimental time by 40% compared with the CMA algorithm, confirming the benefit of ordinal optimization for rapidly optimizing multi-tier distributed systems.
Table 1: values of the regression function parameters Z_1, Z_2, Z_3, Z_4 in s(k, g) (the table contents appear only as images, Figure G2009100882256D00071 and Figure G2009100882256D00081, in the original publication).

Claims (1)

1. A method for improving the performance tuning speed of a distributed system, characterized in that the method is a tuning method for a distributed system composed of three tiers of servers comprising a web server, an application server, and a database server, and that it is realized in said server group according to the following steps:
Step (1): initialization.
Set N parameters p_1, p_2, ..., p_N, N = 7, the seven parameters being: the maximum number of keep-alive requests MaxKeepAliveRequests, the keep-alive timeout KeepAliveTimeOut, the thread cache size ThreadCacheSize, the maximum inactivity interval MaxInactiveInterval, the maximum number of connections MaxConnections, the key buffer size KeyBufferSize, and the sort buffer size SortBufferSize; the range of each parameter is taken to be [p_n_min, p_n_max], n = 1, 2, ..., N;
Step (2): linearly quantize the value space of the N parameters onto [0, 100];
Step (3): draw 200 parameter groups uniformly at random from the space [0, 100]: {v_{1,1}, v_{2,1}, ..., v_{N,1}}, {v_{1,2}, v_{2,2}, ..., v_{N,2}}, ..., {v_{1,200}, v_{2,200}, ..., v_{N,200}}, where the first subscript denotes the parameter index and the second the group index; then linearly quantize all obtained values back into the original parameter value spaces [p_n_min, p_n_max], n = 1, 2, ..., N;
Step (4): test the response time and throughput of the above 200 groups of N parameters in the real environment, denoted (t_{1,1}, r_{2,1}), (t_{1,2}, r_{2,2}), ..., (t_{1,200}, r_{2,200}); use the 200 input/output results tested in step (3) to train a three-layer neural network model with two hidden layers, every node of which uses the logistic function; this neural network model need be trained only once;
Step (5): repeat step (3) to select another 200 groups of N parameters {v'_{1,1}, v'_{2,1}, ..., v'_{N,1}}, ..., {v'_{1,200}, v'_{2,200}, ..., v'_{N,200}}; after linear quantization, input them to the neural network model determined in step (4) to obtain, for each of the new 200 groups, a coarse response time t' and a coarse throughput r', denoted (t'_{1,1}, r'_{2,1}), (t'_{1,2}, r'_{2,2}), ..., (t'_{1,200}, r'_{2,200});
Step (6): use the coarse-model response times to determine, according to the following steps, the type of the idealized ordered performance curve (OPC, Ordinal Performance Curve) that characterizes the system performance:
Step (6.1): sort the response times t'_{1,1}, t'_{1,2}, ..., t'_{1,200} of the 200 parameter groups obtained in step (5) in ascending order into t'_{1,[1]}, t'_{1,[2]}, ..., t'_{1,[200]}, and denote the correspondingly reordered 200 parameter groups as {v'_{1,[1]}, v'_{2,[1]}, ..., v'_{N,[1]}}, {v'_{1,[2]}, v'_{2,[2]}, ..., v'_{N,[2]}}, ..., {v'_{1,[200]}, v'_{2,[200]}, ..., v'_{N,[200]}};
Step (6.2): for the i-th response time t'_{1,[i]}, compute y_i = (t'_{1,[i]} - t'_{1,[1]}) / (t'_{1,[200]} - t'_{1,[1]}); y_i is the difference between the i-th response time and the minimum response time as a fraction of the difference between the maximum and minimum response times;
Step (6.3): for the i-th response time t'_{1,[i]}, compute x_i = (i - 1)/(200 - 1) = (i - 1)/199; x_i is the difference between the rank of the i-th response time and the rank of the minimum response time as a fraction of the difference between the ranks of the maximum and minimum response times;
Step (6.4): denote the 200 pairs (x_i, y_i) obtained in steps (6.2) and (6.3) as y = A(x), plotted in rectangular coordinates with the ordinate normalized to the interval [0, 1] and the abscissa covering the normalized ranks 1 to 200; the ordered performance curve of the problem is of the Bell-type OPC, meaning that the relatively "good" and relatively "bad" solutions among the parameters are evenly distributed in position;
Step (7): determine the noise grade:
Step (7.1): average the 200 response times and throughputs obtained in step (4) to get (t̄, r̄) = (Σ_{i=1}^{200} t_{1,i}/200, Σ_{i=1}^{200} r_{2,i}/200);
Step (7.2): compute the noise grades of the response time t and the throughput r by the following formulas:
the noise grade N_t of the response time is: N_t = |max(t_{1,i} - t̄)| / [max(t_{1,i}) - min(t_{1,i})],
the noise grade N_r of the throughput is: N_r = |max(r_{2,i} - r̄)| / [max(r_{2,i}) - min(r_{2,i})];
compare N_t and N_r and take the larger as the system's noise size; when the selected noise is greater than 2.5 it is the large-noise type, between 0.5 and 2.5 the medium-noise type, and less than 0.5 the small-noise type;
Step (8): from the 200 parameter groups {v'_{1,[1]}, v'_{2,[1]}, ..., v'_{N,[1]}}, ..., {v'_{1,[200]}, v'_{2,[200]}, ..., v'_{N,[200]}} obtained in step (6.1), take the first s groups as the good-enough set {v'_{1,[1]}, v'_{2,[1]}, ..., v'_{N,[1]}}, ..., {v'_{1,[s]}, v'_{2,[s]}, ..., v'_{N,[s]}}:
Step (8.1): set the number of satisfactory solutions to g = 20 and the alignment probability to α = 98%; the number of aligned solutions required among the satisfactory solutions is k = 5, an aligned solution being one of the 20 satisfactory solutions that the method actually finds;
Step (8.2): determine the parameters Z_1, Z_2, Z_3, Z_4 of the regression function s(k, g) = e^{Z_1} · k^{Z_2} · g^{Z_3} + Z_4; these parameters depend on a regression-factor set composed of two factors, namely the ordered performance curve of step (6) and the noise grade of step (7), so Z_1, Z_2, Z_3, Z_4 are uniquely determined by looking up the regression-factor/parameter table; substituting the number of satisfactory solutions and the number of aligned solutions set in step (8.1) into the regression function then yields the size s, with the constant e = 2.71828.
CN2009100882256A 2009-07-13 2009-07-13 Method for improving performance tuning speed of distributed system Active CN101609416B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2009100882256A CN101609416B (en) 2009-07-13 2009-07-13 Method for improving performance tuning speed of distributed system


Publications (2)

Publication Number Publication Date
CN101609416A true CN101609416A (en) 2009-12-23
CN101609416B CN101609416B (en) 2012-11-14

Family

ID=41483177

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2009100882256A Active CN101609416B (en) 2009-07-13 2009-07-13 Method for improving performance tuning speed of distributed system

Country Status (1)

Country Link
CN (1) CN101609416B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110245019B (en) * 2019-06-17 2021-07-06 广东金赋科技股份有限公司 Thread concurrency method and device for self-adaptive system resources

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU3115600A (en) * 1998-12-11 2000-06-26 Microsoft Corporation Accelerating a distributed component architecture over a network using a direct marshaling
CN100553230C (en) * 2007-05-21 2009-10-21 中南大学 A kind of collaborative congestion control method that is used for express network

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103379041A (en) * 2012-04-28 2013-10-30 国际商业机器公司 System detection method and device and flow control method and device
US10171360B2 (en) 2012-04-28 2019-01-01 International Business Machines Corporation System detection and flow control
CN103853786A (en) * 2012-12-06 2014-06-11 中国电信股份有限公司 Method and system for optimizing database parameters
CN103853786B (en) * 2012-12-06 2017-07-07 中国电信股份有限公司 The optimization method and system of database parameter
CN106452934A (en) * 2015-08-10 2017-02-22 ***通信集团公司 Analyzing method for network performance index change trend and device for realizing same
CN105512264A (en) * 2015-12-04 2016-04-20 贵州大学 Performance prediction method of concurrency working loads in distributed database
CN105630458B (en) * 2015-12-29 2018-03-02 东南大学—无锡集成电路技术研究所 The Forecasting Methodology of average throughput under a kind of out-of order processor stable state based on artificial neural network
CN105630458A (en) * 2015-12-29 2016-06-01 东南大学—无锡集成电路技术研究所 Prediction method of out-of-order processor steady-state average throughput rate based on artificial neural network
CN105893258A (en) * 2016-03-31 2016-08-24 中电海康集团有限公司 Performance optimizing method and tool based on artificial fish school algorithm
CN107681781A (en) * 2017-09-13 2018-02-09 清华大学 A kind of control method of energy router plug and play
CN109783219A (en) * 2017-11-10 2019-05-21 北京信息科技大学 A kind of cloud resource Optimization Scheduling and device
CN108733564A (en) * 2018-05-18 2018-11-02 阿里巴巴集团控股有限公司 A kind of browser performance test method, device and equipment
CN109445935A (en) * 2018-10-10 2019-03-08 杭州电子科技大学 A kind of high-performance big data analysis system self-adaption configuration method under cloud computing environment
CN109445935B (en) * 2018-10-10 2021-08-10 杭州电子科技大学 Self-adaptive configuration method of high-performance big data analysis system in cloud computing environment
CN113128659A (en) * 2020-01-14 2021-07-16 杭州海康威视数字技术股份有限公司 Neural network localization method and device, electronic equipment and readable storage medium
CN113099408A (en) * 2021-03-15 2021-07-09 西安交通大学 Simulation-based data mechanism dual-drive sensor node deployment method and system
CN114662252A (en) * 2022-02-25 2022-06-24 佳木斯大学 Method for improving performance index of complex networked random system

Also Published As

Publication number Publication date
CN101609416B (en) 2012-11-14

Similar Documents

Publication Publication Date Title
CN101609416B (en) Method for improving performance tuning speed of distributed system
Song et al. A hadoop mapreduce performance prediction method
CN110309603B (en) Short-term wind speed prediction method and system based on wind speed characteristics
CN112686464A (en) Short-term wind power prediction method and device
CN106803799B (en) Performance test method and device
Anitha A new web usage mining approach for next page access prediction
CN104750780B (en) A kind of Hadoop configuration parameter optimization methods based on statistical analysis
CN103308463A (en) Characteristic spectrum area selection method for near infrared spectrum
CN109981749A (en) A kind of cloud workflow task running time prediction method promoted based on limit gradient
CN104199870A (en) Method for building LS-SVM prediction model based on chaotic search
CN110413657B (en) Average response time evaluation method for seasonal non-stationary concurrency
CN108804576A (en) A kind of domain name hierarchical structure detection method based on link analysis
Ma et al. Measuring China’s urban digital economy
CN116307211A (en) Wind power digestion capability prediction and optimization method and system
Bi et al. Accurate prediction of workloads and resources with multi-head attention and hybrid LSTM for cloud data centers
Lv Research on the Application of Adaptive Matching Tracking Algorithm Fused with Neural Network in the Development of E‐Government
CN117200208B (en) User-level short-term load prediction method and system based on multi-scale component feature learning
Godahewa et al. A strong baseline for weekly time series forecasting
Liu et al. Ace-Sniper: Cloud-Edge Collaborative Scheduling Framework With DNN Inference Latency Modeling on Heterogeneous Devices
Zeng et al. Local epochs inefficiency caused by device heterogeneity in federated learning
Pavlenko Estimation of the upper bound of seismic hazard curve by using the generalised extreme value distribution
CN114596054A (en) Service information management method and system for digital office
CN110516853B (en) Lean elimination time prediction method based on under-sampling improved AdaBoost algorithm
CN112926198A (en) Credibility evaluation method and system of MSaaS simulation architecture
Du [Retracted] Application of Internet of Things Architecture in Intelligent Classroom Teaching Analysis in Colleges and Universities

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C56 Change in the name or address of the patentee
CP03 Change of name, title or address

Address after: Building 18, No. 699-22, Xuanwu District, Nanjing, Jiangsu 210042

Patentee after: CERTUSNET CORP.

Address before: P.O. Box 100084-82, Beijing 100084

Patentee before: Tsinghua University