CN116149764A - Cloud desktop distribution method, device, equipment and computer storage medium - Google Patents

Cloud desktop distribution method, device, equipment and computer storage medium

Info

Publication number
CN116149764A
CN116149764A (application CN202111360151.4A)
Authority
CN
China
Prior art keywords
data
cloud
cloud desktop
distribution
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111360151.4A
Other languages
Chinese (zh)
Inventor
王洪福
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
China Mobile Suzhou Software Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Suzhou Software Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, China Mobile Suzhou Software Technology Co Ltd filed Critical China Mobile Communications Group Co Ltd
Priority to CN202111360151.4A priority Critical patent/CN116149764A/en
Publication of CN116149764A publication Critical patent/CN116149764A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces
    • G06F9/452Remote windowing, e.g. X-Window System, desktop virtualisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Human Computer Interaction (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application discloses a cloud desktop allocation method, apparatus, device, and computer storage medium, wherein the cloud desktop allocation method comprises the following steps: acquiring cloud desktop index data and an initial cloud desktop allocation quantity; performing predictive analysis on the cloud desktop index data with a preset analysis model to determine an allocation coefficient; and compensating the initial cloud desktop allocation quantity with the allocation coefficient to determine the target cloud desktop allocation quantity. In this way, the allocation coefficient is determined from the cloud desktop index data and the preset analysis model, and the initial allocation quantity is compensated with that coefficient to obtain the final target allocation quantity, so that the computing power of the resource pool can be exploited effectively when cloud desktops are allocated and the waste of hardware resources is avoided.

Description

Cloud desktop distribution method, device, equipment and computer storage medium
Technical Field
The present disclosure relates to the field of cloud computing technologies, and in particular, to a cloud desktop allocation method, device, equipment, and computer storage medium.
Background
At present, mobile cloud services are developing vigorously. In the field of software as a service (Software as a Service, SaaS), the mobile cloud desktop product is a virtual desktop product based on virtualization technology; it provides an efficient virtualized desktop solution for enterprise clients and improves office efficiency across industries, while also providing targeted cloud desktop solutions for clients with personalized requirements. With the expansion of service scale and the rapid development of cloud desktops, how to improve server utilization and exploit the maximum computing power of the server is a problem every cloud desktop vendor must consider.
Disclosure of Invention
The cloud desktop allocation method, apparatus, device, and computer storage medium provided by the application can effectively exploit the computing power of a resource pool and avoid wasting hardware resources.
The technical scheme of the application is realized as follows:
in a first aspect, an embodiment of the present application provides a cloud desktop allocation method, where the method includes:
acquiring cloud desktop index data and initial distribution quantity of cloud desktops;
carrying out predictive analysis on the cloud desktop index data by using a preset analysis model, and determining an allocation coefficient;
and compensating the initial distribution quantity of the cloud desktops by using the distribution coefficient, and determining the target distribution quantity of the cloud desktops.
In a second aspect, an embodiment of the present application provides a cloud desktop distribution apparatus, where the cloud desktop distribution apparatus includes an acquisition unit, an analysis unit, and a determination unit, where,
the cloud desktop distribution system comprises an acquisition unit, a distribution unit and a distribution unit, wherein the acquisition unit is configured to acquire cloud desktop index data and initial distribution quantity of cloud desktops;
the analysis unit is configured to conduct predictive analysis on the cloud desktop index data by using a preset analysis model, and determine distribution coefficients;
and the determining unit is configured to compensate the initial distribution quantity of the cloud desktops by using the distribution coefficient and determine the target distribution quantity of the cloud desktops.
In a third aspect, embodiments of the present application provide an electronic device comprising a memory and a processor, wherein,
a memory for storing a computer program capable of running on the processor;
and the processor is used for executing the cloud desktop distribution method according to the first aspect when the computer program is run.
In a fourth aspect, embodiments of the present application provide a computer storage medium storing a computer program, where the computer program is executed by at least one processor to implement the cloud desktop allocation method according to the first aspect.
According to the cloud desktop allocation method, apparatus, device, and computer storage medium provided by the embodiments of the application, cloud desktop index data and an initial cloud desktop allocation quantity are acquired; predictive analysis is performed on the cloud desktop index data with a preset analysis model to determine an allocation coefficient; and the initial allocation quantity is compensated with the allocation coefficient to determine the target allocation quantity. In this way, the allocation coefficient is determined from the cloud desktop index data and the preset analysis model, and the initial allocation quantity is compensated with that coefficient to obtain the final target allocation quantity, so that the computing power of the resource pool can be exploited effectively when cloud desktops are allocated and the waste of hardware resources is avoided.
Drawings
Fig. 1 is a flow chart of a cloud desktop allocation method provided in the related art;
fig. 2 is a flow chart of a cloud desktop allocation method according to an embodiment of the present application;
fig. 3 is a detailed flowchart of a cloud desktop allocation method provided in an embodiment of the present application;
FIG. 4 is a detailed flowchart of data preprocessing according to an embodiment of the present disclosure;
fig. 5 is a schematic diagram of a model structure of a BP neural network according to an embodiment of the present application;
fig. 6 is a schematic diagram of a learning process of a BP neural network according to an embodiment of the present application;
fig. 7 is a schematic diagram of a network structure of a preset analysis model according to an embodiment of the present application;
fig. 8 is a schematic diagram of a composition structure of a cloud desktop distribution device according to an embodiment of the present application;
fig. 9 is a schematic diagram of a composition structure of an electronic device according to an embodiment of the present application;
fig. 10 is a schematic diagram of a composition structure of another electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to be limiting. It should be noted that, for convenience of description, only a portion related to the related application is shown in the drawings.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is to be understood that "some embodiments" can be the same subset or different subsets of all possible embodiments and can be combined with one another without conflict.
It should be noted that the terms "first/second/third" in the embodiments of the present application merely distinguish similar objects and do not imply a specific ordering. It should be understood that "first/second/third" may be interchanged in a specific order or sequence where allowed, so that the embodiments described herein can be practiced in an order other than that illustrated or described.
At present, the specific flow by which a cloud desktop vendor allocates cloud desktops may refer to fig. 1, which shows a flow diagram of a cloud desktop allocation method provided by the related art. As shown in fig. 1, it comprises the following steps:
S101, a client subscribes to a cloud desktop.
S102, inputting preset information and selecting a resource pool.
S103, judging whether the resource pool can open the cloud desktop according to the user input information.
S104, distributing cloud desktops.
As shown in fig. 1, when cloud desktop allocation is performed on one resource pool, the number of allocated cloud desktops depends on the determination method in step S103, and if the determination result is yes, cloud desktop allocation may be performed; otherwise, returning to step S102, information is re-input or a resource pool is selected.
Currently, for step S103, the usual industry calculation uses data such as the core count of the central processing unit (Central Processing Unit, CPU) in the resource pool, the CPU clock frequency, the memory, and the hard disk, combined with the optimal oversubscription percentage provided by the virtualization vendor; the percentage differs between vendors. The recommended virtual machine (Virtual Machine, VM) oversubscription ratios are typically between 1:5 and 1:10, and the main parameters are the CPU core count and the CPU clock frequency. For example, if the resource pool contains 2 CPUs with 32 cores each, then the number of 2-core, 4G cloud desktops that can be allocated at a 1:5 ratio is 2×32÷2×5=160, i.e. 160 cloud desktops.
However, in practical applications it has been found that, when cloud desktops are allocated by the method provided in the related art, the resource-pool allocation may not match actual usage, because the only parameter behind the allocated quantity is the CPU oversubscription percentage. For example, suppose a resource pool is allocated 160 cloud desktops in a customer-service-center scenario where 80 cloud desktops are used during the day and 80 at night; then while the 80 daytime desktops are in use, the other 80 sit idle, and likewise at night, which wastes resources. For another example, in a research and development scenario people work during the day, concurrency is relatively high, and CPU load is relatively heavy; if 160 cloud desktops are allocated as recommended, users may experience lag, resulting in a poor user experience.
Based on this, the embodiment of the application provides a cloud desktop allocation method whose basic idea is as follows: acquire cloud desktop index data and an initial cloud desktop allocation quantity; perform predictive analysis on the cloud desktop index data with a preset analysis model to determine an allocation coefficient; and compensate the initial allocation quantity with the allocation coefficient to determine the target allocation quantity. In this way, the allocation coefficient is determined from the cloud desktop index data and the preset analysis model, and the initial allocation quantity is compensated with that coefficient to obtain the final target allocation quantity, so that the computing power of the resource pool can be exploited effectively when cloud desktops are allocated and the waste of hardware resources is avoided.
Embodiments of the present application will be described in detail below with reference to the accompanying drawings.
In an embodiment of the present application, referring to fig. 2, a flow diagram of a cloud desktop allocation method provided in an embodiment of the present application is shown. As shown in fig. 2, the method may include:
s201, acquiring cloud desktop index data and initial distribution quantity of cloud desktops.
The cloud desktop distribution method provided by the embodiment of the application can be applied to a device for cloud desktop distribution or an electronic device integrated with the device. Here, the electronic device may be, for example, a computer, a smart phone, a tablet computer, a notebook computer, a palm computer, a personal digital assistant (Personal Digital Assistant, PDA), a navigation device, a server, or the like, which is not particularly limited in the embodiments of the present application.
It should be further noted that, when cloud desktops are allocated, predictive analysis is performed on the cloud desktop index data using a preset analysis model to obtain an allocation coefficient; the initial cloud desktop allocation quantity is then compensated with this coefficient to yield the final target allocation quantity. Thus, when the resource pool is allocated, the computing power of the hardware resources can be fully exploited and resource waste avoided.
The cloud desktop index data mainly refer to index data which can influence the distribution quantity of cloud desktops and the quality of the cloud desktops. In the actual use process, when the cloud desktop index data is acquired, a selection window or an input interface can be provided, and the cloud desktop index data is input by a client or the client requirements are analyzed and extracted to determine the cloud desktop index data.
The initial cloud desktop allocation quantity mainly refers to the recommended cloud desktop quantity determined by a cloud computing vendor from data such as the core count of the resource pool, the CPU clock frequency, the memory, and the hard disk, combined with the optimal oversubscription percentage; that is, it is the allocation quantity determined by the currently conventional method. Different cloud computing vendors provide different percentages, and resource-pool specifications differ, so the initial cloud desktop allocation quantity also differs across resource pools and vendors.
In some embodiments, the cloud desktop index data may include at least one of: bandwidth, operator type, usage scenario, industry, desktop usage rate, and desktop usage preference.
It should be noted that, after comprehensive analysis is performed on the historical data of the mobile cloud desktop, the factors such as bandwidth, operator type, use scene, industry, desktop use rate and desktop use preference have a great influence on cloud desktop allocation. Therefore, the embodiment of the application uses bandwidth, operator type, usage scenario, industry, desktop usage rate and desktop usage preference as cloud desktop index data for determining the distribution coefficient.
Here, the bandwidth mainly refers to the user's bandwidth; operator types may include Mobile, Unicom, Telecom, etc.; usage scenarios may mainly include public network, private line, virtual private network (Virtual Private Network, VPN), and so on; industries may mainly include finance, medical, transportation, education, government, business hall, chain pharmacy, etc.; desktop usage preferences may mainly include daytime and night.
It should be noted that, the specific types of cloud desktop index data listed herein are only exemplary, and in actual use, the types of cloud desktop index data may be increased or decreased in combination with actual requirements, which is not specifically limited in the embodiments of the present application. For example, the industry may include more types or select other industry divisions, desktop usage preferences may be divided by time period, etc.
For an initial allocation amount of cloud desktops, in some embodiments, the method may further comprise:
acquiring the CPU core count and the oversubscription percentage of the resource pool;
and determining the initial cloud desktop allocation quantity according to the CPU core count and the oversubscription percentage.
It should be noted that the initial cloud desktop allocation quantity may be determined according to the CPU core count and the oversubscription percentage of the resource pool. For example, the initial allocation quantity may be determined with reference to the following equation (1):
Initial cloud desktop allocation quantity = (CPU core count of the resource pool ÷ CPU core count per cloud desktop) × oversubscription percentage  (1)
Here, the CPU core count of the resource pool is the sum of the core counts of all CPUs in the pool; if the CPUs in the pool share the same specification, it is the core count of a single CPU multiplied by the number of CPUs. For example, for a resource pool with 64 CPU cores, at a 1:5 oversubscription ratio and with 2 CPU cores per cloud desktop to be allocated, the initial allocation quantity is 64÷2×5=160.
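Equation (1) can be sketched in a few lines of Python (a minimal illustration; the function and parameter names are assumptions, not taken from the patent):

```python
def initial_allocation(pool_cpu_cores: int, desktop_cores: int, ratio: int) -> int:
    """Equation (1): (pool CPU cores / cores per desktop) * oversubscription ratio."""
    return pool_cpu_cores // desktop_cores * ratio

# The example from the text: a 64-core pool, 2-core desktops, a 1:5 ratio.
print(initial_allocation(64, 2, 5))  # 160
```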
S202, carrying out predictive analysis on cloud desktop index data by using a preset analysis model, and determining distribution coefficients.
It should be noted that, in the embodiment of the present application, prediction analysis is performed on the acquired cloud desktop index data by using a preset analysis model, so as to determine the distribution coefficient.
For a preset analytical model, in some embodiments, the method may further comprise:
acquiring a sample data set; the sample data set comprises N groups of sample data, each group of sample data comprises cloud desktop sample index data and corresponding sample distribution coefficients, and N is an integer greater than zero;
training the preset neural network by using the N groups of sample data to obtain a preset analysis model.
It should be noted that, in the embodiment of the present application, the influence of the user profile on cloud desktop allocation is fully considered, so a sample data set containing cloud desktop index data is used to train a preset neural network model and obtain the preset analysis model. That is, each of the N sets of sample data in the sample data set includes cloud desktop sample index data (for example, bandwidth, operator type, usage scenario, industry, desktop usage rate, and desktop usage preference) and a corresponding sample allocation coefficient.
In some embodiments, the preset neural network is a BP neural network having a three-layer structure; the three-layer structure comprises an input layer, a hidden layer and an output layer.
It should be noted that the back-propagation neural network (Back-Propagation Network, BP neural network) is highly nonlinear and generalizes well; compared with algorithms such as logistic regression, classification, and clustering, a model trained on a BP neural network can produce more accurate predictions. The BP neural network is therefore the preferred preset neural network in the embodiment of the present application, although other network models may also be used; this is not particularly limited.
In some embodiments, acquiring a sample dataset may include:
acquiring N groups of original sample data, wherein each group of original sample data comprises cloud desktop original index data and corresponding sample distribution coefficients;
in the first group of original sample data, carrying out data segmentation and extraction processing on the first cloud desktop original index data to obtain at least one numerical value data and at least one classification data; the first group of original sample data represents any one of N groups of original sample data, and the first group of original sample data comprises first cloud desktop original index data and corresponding first sample distribution coefficients;
Carrying out standardization processing on at least one numerical value data to obtain at least one standardized data;
converting at least one classified data to obtain at least one flag variable;
splicing at least one piece of standardized data, at least one mark variable and a first sample distribution coefficient to obtain a first group of sample data;
after obtaining the N sets of sample data, a sample data set is composed from the N sets of sample data.
It should be noted that, when acquiring the sample data set, N sets of original sample data may first be obtained from a large amount of existing historical data. In the embodiment of the application, the historical data may be cleaned and screened to obtain the N sets of original sample data.
Each set of original sample data in the N sets of original sample data is processed in the following manner, any one set of original sample data in the N sets of original sample data is recorded as first set of original sample data, and the first set of original sample data comprises first cloud desktop original index data and corresponding first sample distribution coefficients.
Illustratively, referring to table 1, an example of a first set of raw sample data is shown.
TABLE 1
| Resource pool | Bandwidth (M) | Operator type | Usage scenario | Industry | Desktop usage rate | Usage preference | Allocation coefficient |
| Resource pool 1 | 20 | Mobile | Public network | Finance | 0.8 | Daytime | 0.09 |
As shown in table 1, for resource pool 1 the bandwidth is 20 megabits (20M), the operator type is Mobile, the usage scenario is public network, the industry is finance, the desktop usage rate is 0.8, and the desktop usage preference is daytime; the corresponding allocation coefficient, calculated from historical usage data, is 0.09.
For the original sample data, the attributes of the different cloud desktop index data differ, and the scales of the different attributes differ considerably; using the original sample data directly for model training would therefore hurt the model's prediction performance.
Therefore, in the embodiment of the application, data segmentation and extraction are first performed on the first cloud desktop original index data in the first set of original sample data, yielding 7 fields: resource pool 1, 20, mobile, public network, finance, 0.8, and daytime. The field resource pool 1 does not participate in subsequent modeling, so it is split off and left unprocessed. The fields 20 (bandwidth, in megabits) and 0.8 (desktop usage rate) are two numerical data items with different units, so a dimensional difference exists and standardization is needed. Mobile, public network, finance, and daytime are four categorical data items that need to be converted into flag variables.
Specifically, when normalizing the bandwidth (in megabits) and the desktop usage rate, zero-mean (Z-score) normalization, maximum-minimum (Max-Min) normalization, or maximum-absolute-value (MaxAbs) normalization may be used to obtain the corresponding numerical data.
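The three normalization options mentioned above can be sketched as follows (an illustrative sketch; the sample bandwidth values are assumptions, and the patent does not specify which normalization is used):

```python
import statistics

def z_score(xs):
    """Zero-mean (Z-score) normalization: (x - mean) / standard deviation."""
    mu, sigma = statistics.mean(xs), statistics.pstdev(xs)
    return [(x - mu) / sigma for x in xs]

def max_min(xs):
    """Max-Min normalization onto [0, 1]."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

def max_abs(xs):
    """MaxAbs normalization onto [-1, 1] by the largest absolute value."""
    m = max(abs(x) for x in xs)
    return [x / m for x in xs]

bandwidths = [20, 50, 100]  # illustrative bandwidth values in megabits
print(max_min(bandwidths))  # [0.0, 0.375, 1.0]
```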
The operator type, usage scenario, industry, and desktop usage preference are converted to obtain the corresponding flag variables. For example: the operator type is converted into three fields, is-Mobile, is-Unicom, and is-Telecom; the usage scenario into three fields, is-public-network, is-private-line, and is-VPN; the industry into seven fields, finance, medical, transportation, education, government, business hall, and chain pharmacy; and the desktop usage preference into two fields, daytime and night. A field is marked 1 if it applies and 0 otherwise, so the 4 categorical data items can be converted into flag variables represented by 1 or 0.
The resulting numerical data and flag variables are then spliced back together with the corresponding allocation coefficient, so that the original sample data of table 1 can be converted into the sample data shown in table 2.
TABLE 2
[Table 2: the normalized bandwidth and desktop usage rate, followed by the flag variables — is-Mobile=1, is-Unicom=0, is-Telecom=0, is-public-network=1, is-private-line=0, is-VPN=0, finance=1 (other industry flags 0), daytime=1, night=0 — and the allocation coefficient 0.09.]
Note that the cloud desktop index data acquired in step S201 refers to cloud desktop index data in the form shown in table 2. In practical use, the client may input index data in the original format, in which case the cloud desktop allocation apparatus converts the original index data into the form of table 2; alternatively, the client may directly input index data in the form of table 2. The embodiment of the present application does not limit this.
The N sets of raw sample data are processed in such a way that N sets of sample data can be obtained, which form a sample data set.
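The segmentation, normalization, and flag-variable conversion described above can be sketched as follows (the field names, category lists, and bandwidth range are illustrative assumptions, not taken from the patent):

```python
OPERATORS = ["mobile", "unicom", "telecom"]
SCENARIOS = ["public network", "private line", "VPN"]
INDUSTRIES = ["finance", "medical", "transportation", "education",
              "government", "business hall", "chain pharmacy"]

def min_max(value, lo, hi):
    """Max-Min normalization onto [0, 1]."""
    return (value - lo) / (hi - lo)

def one_hot(value, categories):
    """Convert one categorical field into 0/1 flag variables."""
    return [1 if value == c else 0 for c in categories]

def preprocess(sample, bandwidth_range=(0, 100)):
    """Turn one raw record into a numeric feature vector plus its coefficient."""
    features = [min_max(sample["bandwidth"], *bandwidth_range),
                sample["desktop_usage"]]  # usage rate is already in [0, 1]
    features += one_hot(sample["operator"], OPERATORS)
    features += one_hot(sample["scenario"], SCENARIOS)
    features += one_hot(sample["industry"], INDUSTRIES)
    features += [1 if sample["preference"] == "daytime" else 0,
                 1 if sample["preference"] == "night" else 0]
    return features, sample["coefficient"]

# The Table 1 record (the resource-pool name is split off and not modeled):
row = {"bandwidth": 20, "operator": "mobile", "scenario": "public network",
       "industry": "finance", "desktop_usage": 0.8, "preference": "daytime",
       "coefficient": 0.09}
features, coefficient = preprocess(row)
```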
Training the preset neural network by using the sample data set to obtain a preset analysis model.
In some embodiments, training the preset neural network with the N sets of sample data to obtain the preset analysis model may include:
initializing a preset neural network to obtain an initial neural network;
Correcting the initial neural network by using the k-th group of sample data to obtain a corrected neural network; wherein k is an integer greater than zero and less than or equal to N;
calculating an error function of the corrected neural network;
if the error function meets the preset condition, determining the corrected neural network as a preset analysis model;
if the error function does not meet the preset condition, increment k by 1, take the corrected neural network as the new initial neural network, and repeat the step of correcting the initial neural network with the k-th set of sample data until the error function meets the preset condition.
Taking the BP neural network as the preset neural network as an example, when training it with the sample data set, the BP neural network is first initialized: each weight (also called a connection weight) is assigned a random number in the interval (-1, 1), the error function is set, and the computation precision and the maximum number of learning iterations are preset. Once this initialization is complete, the initial BP neural network is obtained.
Then the k-th set of sample data is randomly selected from the sample data set and used to correct the initial BP neural network — specifically, to correct its weights — yielding a corrected BP neural network. The error function of the corrected BP neural network is then calculated; if the error function meets the preset condition, i.e. reaches the preset precision (its value is less than or equal to the preset precision), the corrected BP neural network is determined to be the preset analysis model and training is finished. Alternatively, if the number of learning iterations reaches the maximum, training may likewise be deemed complete and the preset analysis model obtained.
If the error function does not meet the preset condition, that is, it has not reached the preset precision and the number of learning iterations has not reached the maximum, the model continues to learn and train: the corrected BP neural network is determined as the initial BP neural network, and the initial BP neural network is then trained with the next group of sample data, until the error function meets the preset condition or the number of learning iterations reaches the maximum, at which point the preset analysis model is obtained.
That is, in the embodiment of the present application, the input of the preset analysis model is cloud desktop index data and the output is the corresponding allocation coefficient. The acquired cloud desktop index data is predicted and analyzed with the preset analysis model to obtain the corresponding allocation coefficient.
And S203, compensating the initial distribution quantity of the cloud desktops by using the distribution coefficient, and determining the target distribution quantity of the cloud desktops.
It should be noted that, after the allocation coefficient is obtained, the initial allocation number of the cloud desktop may be compensated by using the allocation coefficient, so as to determine the final target allocation number of the cloud desktop.
In some embodiments, compensating the initial allocation number of the cloud desktop by using the allocation coefficient, and determining the target allocation number of the cloud desktop may include:
Adding one to the distribution coefficient to obtain a compensation coefficient;
and multiplying the compensation coefficient and the initial distribution quantity of the cloud desktop to obtain the target distribution quantity of the cloud desktop.
It should be noted that, a specific way to compensate the initial allocation number of the cloud desktop by using the allocation coefficient may be: and adding one to the distribution coefficient to obtain a compensation coefficient, and multiplying the compensation coefficient by the initial distribution number of the cloud desktop to obtain the final target distribution number of the cloud desktop.
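As a minimal sketch of this compensation step (the function name and the rounding-up to whole desktops are my own assumptions; the text only specifies adding one to the coefficient and multiplying):

```python
import math

def target_allocation(n_initial: int, theta: float) -> int:
    """Compensate the initial cloud-desktop allocation number n with the
    allocation coefficient theta, per x = n * (1 + theta) (formula (2))."""
    compensation = 1.0 + theta  # add one to the allocation coefficient
    # Rounding up is an assumption; desktops are allocated in whole units.
    return math.ceil(n_initial * compensation)
```

For example, an initial allocation of 100 desktops with a predicted coefficient θ = 0.15 yields a target allocation of 115.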
In this way, the input cloud desktop index data is analyzed by combining the BP neural network with a big data algorithm; after the allocation coefficient is predicted, it is used to compensate the initial allocation number and obtain the cloud desktop target allocation number, ensuring that the final result is closer to the actual situation.
The embodiment provides a cloud desktop allocation method: obtaining cloud desktop index data and the initial allocation number of cloud desktops; performing predictive analysis on the cloud desktop index data with a preset analysis model to determine an allocation coefficient; and compensating the initial allocation number of the cloud desktops with the allocation coefficient to determine the target allocation number of the cloud desktops. In this way, the allocation coefficient determined by the preset analysis model is used to compensate the initial allocation number and obtain the final target allocation number, so that when cloud desktops are divided out of a resource pool, the computing power of the resource pool can be exerted effectively, waste of hardware resources is avoided, and the utilization rate of the servers in the resource pool is improved. In addition, when determining the allocation coefficient, the influence on cloud desktop allocation of index data such as bandwidth, carrier type, usage scenario, industry, desktop usage rate and desktop usage preference is fully considered, so a more accurate allocation coefficient, and hence a more accurate prediction result, can be obtained, effectively supplementing the initial allocation number determined in the conventional way.
In another embodiment of the present application, referring to fig. 3, a detailed flowchart of a cloud desktop allocation method provided in an embodiment of the present application is shown. As shown in fig. 3, the detailed flow may include:
S301, sample collection and preprocessing.
It should be noted that, in order to solve the problem of resource waste in the resource pool when the cloud desktop is allocated in the related art, the embodiment of the application provides a calculation method for the number of newly opened cloud desktops in the resource pool, see formula (2), as follows,
x=n×(1+θ) (2)
wherein x represents the target allocation quantity of the cloud desktop, n represents the initial allocation quantity of the cloud desktop, and θ represents the allocation coefficient of a preset resource pool. In the embodiment of the application, n is generally the number of cloud desktops calculated by using industry standard percentages, for example, the number of cloud desktop allocation recommended by a cloud service provider.
To ensure that the allocated number of cloud desktops realizes the maximum computing power, an optimal solution for θ needs to be found; that is, the problem of calculating the target allocation number x of cloud desktops for a resource pool can be converted into the problem of finding the optimal solution of the preset resource pool allocation coefficient θ. In the embodiment of the application, the optimal solution of θ can be calculated through a BP neural network algorithm.
When the cloud desktop target distribution number is calculated based on the BP neural network algorithm, model training is needed to be carried out by utilizing a sample data set, so that sample acquisition and preprocessing are firstly carried out.
From a comprehensive analysis of mobile cloud desktop historical data in recent years, besides the server constraint factors (such as the resource pool's CPU core count, CPU clock frequency, memory and hard disk), the factors affecting cloud desktop allocation also include: the user's bandwidth, carrier type (e.g. Mobile, Telecom, Unicom), usage scenario (e.g. public network, private line, VPN), industry (e.g. finance, medical, transportation, education, government affairs, business halls, chain drug stores, etc.), and desktop usage preference (e.g. daytime, evening). Therefore, when collecting the raw sample data, the embodiment of the application mainly collects the user's bandwidth, carrier type, usage scenario, industry, desktop usage rate, desktop usage preference and similar information.
For example, with the continuous development of mobile cloud desktops in recent years, more than 50,000 cloud desktops have been opened and a large amount of relevant data has been accumulated. Therefore, when collecting sample data, user order data and resource pool desktop allocation data can be exported in advance; redundant data is then deleted through data cleaning, that is, data that does not actually affect the cloud desktop target allocation number, invalid data and the like are removed, and the optimal allocation data that was manually adjusted and marked in actual work is screened out; finally, the desktop allocation data indexes of each resource pool are taken as input to obtain the raw sample data shown in table 3. Since the collected data are known, that is, for each resource pool the initial allocation number and the target allocation number of cloud desktops are known, the allocation coefficient corresponding to each resource pool (the sample allocation coefficient) can be calculated according to formula (2).
Table 3
[Table 3: raw sample data per resource pool (bandwidth, carrier type, usage scenario, industry, desktop usage rate, desktop usage preference, sample allocation coefficient); the table is rendered as an image in the source.]
The data shown in table 3 are several groups of raw sample data. Considering that the attributes of the cloud desktop index data differ and the orders of magnitude between indexes vary greatly, if the raw sample data were used directly for model training, the indexes with larger values would appear more prominent and important in the evaluation model while the indexes with smaller values might become insignificant, so index data of different cloud desktop indexes cannot be directly compared and combined.
To solve this problem, the embodiment of the application unifies the comparison standard, thereby ensuring the reliability of the result. Specifically, before model training, the raw cloud desktop index data in the raw sample data can be standardized, converting it into dimensionless standardized data without order-of-magnitude differences, so that the influence caused by the different attributes of different indexes is eliminated and the results are more comparable.
The following describes standardization of the cloud desktop raw index data, taking the raw sample data shown in table 3 as an example.
It should be noted that the resource pool field in table 3 does not participate in calculation and analysis and needs to be removed when the data is split. The bandwidth (Mbit) and desktop usage rate fields have different units, that is, a dimensional difference, and cannot directly participate in analysis, so standardization is required. The carrier type, scenario, industry and desktop usage preference fields are string types and can be converted into classification data to facilitate subsequent analysis and calculation.
Specifically, referring to fig. 4, a detailed flow chart of data preprocessing provided in the embodiment of the present application is shown. As shown in fig. 4, the detailed flow may include:
s401, dividing data, and extracting a field to be processed.
It should be noted that the raw cloud desktop index data is first split and extracted: when splitting the data, the resource pool field is removed and the fields to be processed are extracted, for example: bandwidth (Mbit), carrier type, scenario, industry, desktop usage rate and desktop usage preference.
S402, judging whether the field is numerical data or classified data.
Each extracted field is judged to be numerical data or classification data. In the embodiment of the application, the bandwidth (Mbit) and desktop usage rate fields are numerical data; the carrier type, scenario, industry and desktop usage preference fields are classification data.
It should be noted that, if the determination result is the numerical data, step S403 is executed; if the determination result is classification data, step S404 is performed.
S403, converting the numerical data into standardized data.
It should be noted that when converting numerical data into standardized data, the embodiment of the present application may use one or more of the following three standardization manners: Z-score standardization, Max-Min standardization, and MaxAbs standardization.
That is, in actual use, the most suitable of the three standardization manners (or another standardization manner) can be selected in combination with the data characteristics to process the numerical data, and training is then performed to obtain the preset analysis model. Alternatively, the numerical data may be processed with two or all three of the standardization manners, subsequent model training executed for each, and the model with the best training effect selected as the final preset analysis model.
For Z-score standardization, the processed data set has a mean of 0 and a standard deviation of 1. Z-score standardization is a decentering method: it alters the distribution structure of the raw data and is not suitable for sparse data.
The Z-score formula is formula (3), specifically as follows,

x' = (x - x̄) / SD (3)

where x' is the data after Z-score standardization, x is the raw data before standardization, x̄ is the mean of the raw data, and SD is the standard deviation of the raw data.
For Max-Min standardization, the processed data falls into the interval [0, 1]. The greatest advantage of Max-Min standardization is that it preserves the structure of the raw data, making it suitable for sparse data.
The Max-Min formula is formula (4), specifically as follows,

x' = (x - Min) / (Max - Min) (4)

where x' is the data after Max-Min standardization, x is the raw data before standardization, Min is the minimum value of the raw data, and Max is the maximum value of the raw data.
For MaxAbs standardization, the processed data also falls into an interval, in this case [-1, 1]. MaxAbs standardization likewise preserves the original distribution structure of the data, so it can also be used for sparse data.
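A minimal NumPy sketch of the three standardization manners (function names are illustrative, and guarding against a zero denominator for constant columns is omitted):

```python
import numpy as np

def z_score(x: np.ndarray) -> np.ndarray:
    # Formula (3): x' = (x - mean) / SD; the result has mean 0 and
    # standard deviation 1, but the distribution structure changes.
    return (x - x.mean()) / x.std()

def max_min(x: np.ndarray) -> np.ndarray:
    # Formula (4): x' = (x - Min) / (Max - Min); the result falls in
    # [0, 1] and the structure of the raw data is preserved.
    return (x - x.min()) / (x.max() - x.min())

def max_abs(x: np.ndarray) -> np.ndarray:
    # x' = x / max(|x|); the result falls in [-1, 1], also
    # structure-preserving, so it suits sparse data.
    return x / np.abs(x).max()
```

In practice, each numerical column (bandwidth, desktop usage rate) would be passed through the chosen scaler independently.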
S404, converting the classification data into flag variables.
It should be noted that classification data has no intrinsic mathematical meaning and is only used to distinguish categories of things. For the classification data to be representative in each row of data, it needs to be converted into flag variables. For example, the carrier type field can be expanded into three fields: whether Mobile, whether Unicom, whether Telecom; a field is marked 1 if yes and 0 if not. The other classification data are handled in the same way.
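This flag-variable conversion is a one-hot encoding; a minimal sketch (the function name and the example category list are illustrative):

```python
def to_flags(value: str, categories: list[str]) -> list[int]:
    """Expand one classification field into 0/1 flag variables:
    1 if the row belongs to that category, 0 otherwise."""
    return [1 if value == c else 0 for c in categories]

# Carrier type becomes three fields: whether Mobile / Unicom / Telecom.
carriers = ["Mobile", "Unicom", "Telecom"]
```

The same expansion applies to the scenario, industry, and desktop usage preference fields.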
S405, data stitching and remodelling.
It should be noted that after each item of raw cloud desktop index data is processed according to steps S401 to S404 to obtain standardized data and flag variables, data splicing and reshaping are performed. The result after re-splicing is shown in table 4.
Table 4
[Table 4: the sample data set after standardization, flag-variable conversion and re-splicing; each row is one sample. The table is rendered as an image in the source.]
In this way, a sample data set as shown in table 4 is obtained, where each row of table 4 represents one piece of sample data. After the sample data set is obtained (or before, or at the same time), the BP neural network (also called the BP neural network model) can be constructed.
S302, constructing a BP neural network.
The BP neural network, also called a back-propagation neural network, continuously corrects the network weights and thresholds through training on sample data so that the error function descends along the negative gradient direction and approaches the expected output. This neural network model is widely used, for example for function approximation, pattern recognition and classification, data compression, and time series prediction.
Referring to fig. 5, a schematic diagram of the model structure of a BP neural network according to an embodiment of the present application is shown. As shown in fig. 5, the BP neural network is composed of an input layer, a hidden layer and an output layer; there may be one or more hidden layers. Fig. 5 shows an m×k×n three-layer BP neural network, where the input layer includes m neurons, the hidden layer includes k neurons, and the output layer includes n neurons. During model training, the network uses a Sigmoid function (also called an S function), and the weights and thresholds of the network are continuously adjusted through error back-propagation so that the error function is minimized.
The BP neural network has high nonlinearity and strong generalization capability, but also has disadvantages such as slow convergence, many iteration steps, a tendency to fall into local minima, and poor global search capability. Therefore, a genetic algorithm can be used to optimize the BP neural network, first finding a better search space within the analysis space, after which the BP neural network searches for the optimal solution within this smaller space.
The transfer function employed by the BP network is the nonlinear S (sigmoid) transformation function. Its characteristic is that the function itself and its derivative are both continuous, which makes it very convenient to work with. There are two types of S function: the unipolar S function and the bipolar S function.
Wherein the unipolar S-shaped function is defined as formula (5), specifically as follows,
f(x) = 1 / (1 + e^(-x)) (5)

where f(x) represents the output of a hidden layer neuron under the unipolar S function and x represents the input of the hidden layer neuron.
The bipolar sigmoid function is defined as in equation (6), specifically as follows,
f(x) = (1 - e^(-x)) / (1 + e^(-x)) (6)

where f(x) represents the output of a hidden layer neuron under the bipolar S function and x represents the input of the hidden layer neuron.
In the embodiment of the application, the transfer function may be the unipolar S function or the bipolar S function; alternatively, both functions may be used for model training to obtain two models, and the model with the more accurate analysis results is determined as the preset analysis model. Illustratively, the following description takes the unipolar S function as an example.
When using an S-type activation function, the input is as in equation (7), specifically as follows,
net = x_1·w_1 + x_2·w_2 + ... + x_n·w_n (7)

where net represents the input of a hidden layer neuron, x_i represents the i-th item of input layer information, and w_i represents the weight from the i-th input layer neuron to the hidden layer neuron; 1 ≤ i ≤ n, where n represents the number of input layer neurons.
The output is represented by the formula (8), specifically as follows,
y = f(net) = 1 / (1 + e^(-net)) (8)

where y (that is, f(net)) represents the output of the hidden layer neuron and net represents the input of the hidden layer neuron.
The derivative of the output is shown as formula (9), specifically as follows,
f'(net) = y·(1 - y) (9)

where f'(net) represents the derivative of the output of the hidden layer neuron, net represents the input of the hidden layer neuron, and y represents the output of the hidden layer neuron.
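Formulas (7) to (9) translate directly into code; a small illustrative sketch (names are my own):

```python
import math

def neuron_input(xs, ws):
    # Formula (7): net = x_1*w_1 + x_2*w_2 + ... + x_n*w_n
    return sum(x * w for x, w in zip(xs, ws))

def sigmoid(net: float) -> float:
    # Formula (8): unipolar S function, y = 1 / (1 + e^(-net))
    return 1.0 / (1.0 + math.exp(-net))

def sigmoid_prime(net: float) -> float:
    # Formula (9): f'(net) = y * (1 - y), reusing the forward output
    y = sigmoid(net)
    return y * (1.0 - y)
```

The convenient derivative in formula (9) is what makes the S function cheap to use during back-propagation: the gradient is computed from the already-available forward output y.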
The constructed BP neural network model is trained based on the obtained sample data set acquired in step S301, and the detailed steps are as follows in step S303.
S303, deducing a neural network learning algorithm.
Under the stimulus of external input samples, the neural network continuously changes its connection weights so that the network output approaches the expected output (in the embodiment of the application, the expected output is the sample allocation coefficient in the sample data). The core idea is to propagate the output error backwards, layer by layer, through the hidden layer to the input layer in a certain form, that is, to apportion the error to all units of each layer so as to obtain the error signal of each unit, and thereby correct each unit's weights. This is similar to gradient descent, which updates the parameter values each time it learns from one training sample so that the cost function becomes smaller and smaller. The BP neural network assigns random initial values to the network weights, computes forward to the last layer (the output layer), and if there is an error between the output and the actual value, runs the error back-propagation algorithm to optimize the parameter values of each layer.
For the learning process of the BP neural network, refer to fig. 6, which shows a schematic diagram of the learning process of the BP neural network provided in the embodiment of the present application. As shown in fig. 6, the learning process may include:
s601, initializing a network.
Network initialization means initializing the BP neural network: each connection weight of the network is assigned a random number in the interval (-1, 1), an error function E is set, and a calculation precision ε and a maximum number of learning iterations M are preset.
S602, randomly selecting a kth input sample and a corresponding expected output.
The kth input sample and the corresponding expected output are the kth set of cloud desktop sample index data in the sample data set and the corresponding sample distribution coefficient.
The kth input sample is represented by equation (10), which is described in detail below,
x(k) = (x_1(k), x_2(k), ..., x_n(k)) (10)

where x(k) represents the k-th input sample and x_i(k) represents the input of the i-th input layer neuron; 1 ≤ i ≤ n, where n represents the number of input layer neurons.
The desired output for the kth input sample is shown in equation (11), and is described in detail below,
d_0(k) = (d_1(k), d_2(k), ..., d_q(k)) (11)

where d_0(k) represents the expected output corresponding to the k-th input sample and d_o(k) represents the expected output corresponding to the o-th output layer neuron; 1 ≤ o ≤ q, where q represents the number of output layer neurons.
S603, calculating input and output of each neuron of the hidden layer.
The hidden layer neurons are input as shown in equation (12), which is described in detail below,
hi_h(k) = Σ_{i=1..n} w_ih·x_i(k) - b_h,  h = 1, 2, ..., p (12)

where hi_h(k) represents the input of the h-th hidden layer neuron, w_ih represents the weight connecting the i-th input layer neuron to the h-th hidden layer neuron, x_i(k) represents the information of the i-th input layer neuron for the k-th input sample, and b_h represents the threshold of the h-th hidden layer neuron; 1 ≤ h ≤ p, where p represents the number of hidden layer neurons.
The output of hidden layer neurons is shown in equation (13), which is described in detail below,
ho_h(k) = f(hi_h(k)),  h = 1, 2, ..., p (13)

where ho_h(k) represents the output of the k-th input sample at the h-th hidden layer neuron, hi_h(k) represents the corresponding input, and p represents the number of hidden layer neurons.
The input to the output layer neurons is described in equation (14), which is described in detail below,
yi_o(k) = Σ_{h=1..p} w_ho·ho_h(k) - b_o,  o = 1, 2, ..., q (14)

where yi_o(k) represents the input of the k-th input sample at the o-th output layer neuron, w_ho represents the weight from the h-th hidden layer neuron to the o-th output layer neuron, ho_h(k) represents the output of the k-th input sample at the h-th hidden layer neuron, and b_o represents the threshold of the o-th output layer neuron; 1 ≤ o ≤ q, where q represents the number of output layer neurons.
The output of the output layer neurons is shown in equation (15), which is described in detail below,
yo_o(k) = f(yi_o(k)),  o = 1, 2, ..., q (15)

where yo_o(k) represents the output of the k-th input sample at the o-th output layer neuron and yi_o(k) represents the corresponding input.
S604, calculating partial derivatives of the error function on each neuron of the output layer.
The partial derivative of the error function with respect to each output layer weight is decomposed as in formula (16), specifically as follows,

∂e/∂w_ho = (∂e/∂yi_o(k)) · (∂yi_o(k)/∂w_ho) (16)

where ∂e/∂w_ho represents the partial derivative of the error function e with respect to the weight from the h-th hidden layer neuron to the o-th output layer neuron; ∂e/∂yi_o(k) represents the partial derivative of the error function e with respect to the input of the o-th output layer neuron, calculated by formula (17); and ∂yi_o(k)/∂w_ho represents the partial derivative of the input of the o-th output layer neuron with respect to that weight, calculated by formula (18). Formulas (17) and (18) are as follows,

∂e/∂yi_o(k) = -(d_o(k) - yo_o(k))·yo_o(k)·(1 - yo_o(k)) = -δ_o(k) (17)

∂yi_o(k)/∂w_ho = ho_h(k) (18)

where the result of formula (17) is denoted -δ_o(k), and the result of formula (18) is ho_h(k), the output of the h-th hidden layer neuron.
S605, correcting the connection weight.
The correction value of each weight (connection weight) is calculated with reference to formula (19), specifically as follows,

Δw_ho(k) = -μ · ∂e/∂w_ho = μ · δ_o(k) · ho_h(k) (19)

where Δw_ho(k) is the correction value of the weight from the h-th hidden layer neuron to the o-th output layer neuron, and μ is a coefficient (the learning rate).

Each weight is corrected as shown in formula (20), specifically as follows,

w_ho^(N+1) = w_ho^(N) + η · δ_o(k) · ho_h(k) (20)

where w_ho^(N+1) represents the weight from the h-th hidden layer neuron to the o-th output layer neuron after correction, w_ho^(N) represents that weight before correction, and η is a coefficient (the learning rate).
S606, calculating an error function.
The calculation of the error function (also called network error, global error) is shown in equation (21), and is described in detail below,
E = (1/(2m)) · Σ_{k=1..m} Σ_{o=1..q} (d_o(k) - yo_o(k))² (21)

where E represents the error function, m represents the number of samples, d_o(k) represents the expected output of the k-th input sample at the o-th output layer neuron, and yo_o(k) represents the actual output of the k-th input sample at the o-th output layer neuron, that is, the predicted output of the BP neural network; 1 ≤ o ≤ q, where q represents the number of output layer neurons.
S607, judging whether the ending condition is satisfied.
It should be noted that this step judges whether the network error (the error function) satisfies the end condition. When the error function reaches the preset precision, or the number of learning iterations exceeds the set maximum, the algorithm ends and the preset analysis model is obtained. Otherwise, the next learning sample and its corresponding expected output are selected, the flow returns to step S603, and the next round of learning begins, until the preset analysis model is obtained.
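The whole learning loop of fig. 6 (steps S601 to S607) can be sketched compactly in NumPy. This is an illustrative implementation under assumed hyper-parameters (hidden size, learning rate, seed), not the patent's production code; thresholds are folded in as additive biases:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_bp(X, d, hidden=9, eta=0.5, eps=1e-3, max_iters=20000, seed=0):
    """One-hidden-layer BP training loop following fig. 6:
    S601 initialize weights in (-1, 1); S602 pick a random sample;
    S603 forward pass (formulas (12)-(15)); S604/S605 backpropagate
    the error signal and correct the connection weights; S606 compute
    the global error; S607 stop at preset precision or max count."""
    rng = np.random.default_rng(seed)
    n, q = X.shape[1], d.shape[1]
    W1 = rng.uniform(-1, 1, (n, hidden))   # input -> hidden weights
    b1 = rng.uniform(-1, 1, hidden)        # hidden thresholds
    W2 = rng.uniform(-1, 1, (hidden, q))   # hidden -> output weights
    b2 = rng.uniform(-1, 1, q)             # output thresholds
    E = np.inf
    for _ in range(max_iters):
        k = rng.integers(len(X))                    # S602: random k-th sample
        x, t = X[k], d[k]
        ho = sigmoid(x @ W1 + b1)                   # S603: hidden output
        yo = sigmoid(ho @ W2 + b2)                  # S603: network output
        delta_o = (t - yo) * yo * (1 - yo)          # S604: output error signal
        delta_h = (delta_o @ W2.T) * ho * (1 - ho)  # hidden error signal
        W2 += eta * np.outer(ho, delta_o)           # S605: weight correction
        b2 += eta * delta_o
        W1 += eta * np.outer(x, delta_h)
        b1 += eta * delta_h
        Yo = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
        E = 0.5 * np.mean(np.sum((d - Yo) ** 2, axis=1))  # S606: global error
        if E <= eps:                                # S607: end condition
            break
    return (W1, b1, W2, b2), E
```

Trained on a tiny illustrative data set, this loop drives the global error toward the preset precision; in the patent's setting, X would have 17 columns (the standardized index data) and d one column (the sample allocation coefficient).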
S304, training the BP neural network.
It should be noted that the preset analysis model takes each index of each group of data (that is, the cloud desktop index data) as input and the allocation coefficient of the opened cloud desktops as output. In the embodiment of the present application, the number of input layer nodes is 17 and the number of output layer nodes is 1.
Studies have shown that a neural network with one hidden layer can approximate a nonlinear function with arbitrary precision, as long as the hidden layer has sufficiently many nodes (neurons). Therefore, the embodiment of the application establishes the preset analysis model as a three-layer, multi-input single-output BP neural network with one hidden layer. In the network design process, determining the number of hidden layer neurons is important: too many neurons increase the computation load and easily produce overfitting, while too few impair network performance and fail to achieve the desired effect. The number of hidden layer neurons is directly connected with the complexity of the actual problem, the numbers of input and output layer neurons, and the setting of the desired error.
Currently, there is no explicit formula for determining the number of hidden layer neurons, only empirical formulas; the final number must still be determined from experience and through multiple experiments. This embodiment refers to the following empirical formula (22) when selecting the number of hidden layer neurons, specifically as follows,
h = √(n + m) + a (22)

where h is the number of hidden layer neurons, n is the number of input layer neurons, m is the number of output layer neurons, and a is a constant in the interval [1, 10]. According to formula (22), the number of hidden layer neurons can be calculated to lie between 4 and 14. For example, taking the number of hidden layer neurons as 9, refer to fig. 7, which shows a network structure schematic diagram of a preset analysis model provided in an embodiment of the present application. As shown in fig. 7, the input layer of the preset analysis model includes 17 neurons, the hidden layer includes 9 neurons, and the output layer includes one neuron; the input of the preset analysis model is the cloud desktop index data (17 inputs corresponding to the six indexes) and the output is the allocation coefficient (1 coefficient value).
S305, substituting the independent-variable data to be predicted.
After training of the preset analysis model is completed, the independent-variable data set to be predicted (that is, cloud desktop index data) can be substituted into the preset analysis model to predict the cloud desktop allocation coefficient.
In one actual test, after 24 rounds of learning, the network shown in fig. 7 reached the calculation precision of the error function, and the preset analysis model was obtained.
After network training is completed and the preset analysis model is obtained, the allocation coefficient can be obtained simply by inputting each item of cloud desktop index data into the preset analysis model.
Allocation coefficient prediction is then performed with the trained preset analysis model; for example, table 5 shows 5 groups of test samples to be predicted.
Table 5
[Table 5: five groups of test samples to be predicted; the table is rendered as an image in the source.]
It should be noted that after the allocation coefficient is predicted with the preset analysis model, the number of desktops to be allocated in the resource pool, that is, the target allocation number of cloud desktops, can be determined according to the preset calculation of the resource pool desktop allocation number (formula (2)).
As can be seen from the above, when allocating cloud desktops, the embodiment of the application provides a compensation mechanism based on the BP neural network, aiming at the problem that the CPU oversubscription ratio fails to exert the maximum computing power of the hardware resources. When training the preset analysis model, the sample data used may include the following six indexes: the user's bandwidth, carrier type (Mobile, Telecom, Unicom), usage scenario (public network, private line, VPN), industry (finance, medical, transportation, education, government affairs, business halls, chain drug stores, etc.), desktop usage rate, and desktop usage preference (daytime, evening). The factors influencing cloud desktop allocation are determined through comprehensive analysis and used as the sample data set (the training set), making the calculation result more accurate. Before model training, the six kinds of cloud desktop index data are standardized and converted into classification form, so that they can be applied to big-data prediction. In the resource pool allocation process, the calculation is performed with a big data algorithm: the existing historical data is combined and the various influencing factors beyond hardware resources are fully considered, effectively supplementing the CPU oversubscription algorithm. In addition, the network model of the embodiment of the application is a BP neural network, which has high nonlinearity and strong generalization capability; through back-propagation and training on sample data, the network weights and thresholds are continuously corrected so that the error function descends along the negative gradient direction and approaches the expected output.
This embodiment provides a cloud desktop allocation method and details the specific implementation of the foregoing embodiments. Compared with the related art, the cloud desktop allocation method provided by the embodiment of the present application has at least the following advantages: 1. when allocating cloud desktops, a compensation mechanism is considered, so that the maximum computing power of the hardware resources can be exerted; 2. when allocating cloud desktops, the influence of user profiles on cloud desktop oversubscription is considered, whereas the related art does not consider user profiles as influence factors for cloud desktop allocation; 3. when allocating cloud desktops, a big-data algorithm is adopted for calculation, ensuring that the calculation result is closer to the actual situation, whereas the related art calculates the allocation number of cloud desktops only by the CPU (Central Processing Unit) oversubscription ratio; 4. the network model adopted in the embodiment of the present application is a BP neural network, which has high nonlinearity and strong generalization capability and predicts more accurately than algorithms such as logistic regression, classification, and clustering; 5. the embodiment of the present application comprehensively analyzes six influence factors of cloud desktop allocation and fully considers the various scenarios of cloud desktop allocation, so the predicted data are more accurate.
In still another embodiment of the present application, referring to fig. 8, a schematic diagram of a composition structure of a cloud desktop distribution apparatus 80 according to an embodiment of the present application is shown. As shown in fig. 8, the cloud desktop distribution apparatus 80 may include an acquisition unit 801, an analysis unit 802, and a determination unit 803, wherein,
an obtaining unit 801 configured to obtain cloud desktop index data and an initial allocation number of cloud desktops;
the analysis unit 802 is configured to perform predictive analysis on the cloud desktop index data by using a preset analysis model, and determine an allocation coefficient;
and a determining unit 803 configured to compensate the initial allocation number of the cloud desktop by using the allocation coefficient, and determine the target allocation number of the cloud desktop.
In some embodiments, the cloud desktop index data may include at least one of: bandwidth, carrier type, usage scenario, industry, desktop usage, and desktop usage preference.
In some embodiments, referring to fig. 8, the cloud desktop distribution apparatus may further comprise a modeling unit 804 configured to obtain a sample data set; the sample data set comprises N groups of sample data, each group of sample data comprises cloud desktop sample index data and corresponding sample distribution coefficients, and N is an integer greater than zero; training the preset neural network by using the N groups of sample data to obtain a preset analysis model.
In some embodiments, the modeling unit 804 is specifically configured to obtain N groups of raw sample data, where each group of raw sample data includes cloud desktop raw index data and a corresponding sample distribution coefficient; perform data segmentation and extraction processing on the first cloud desktop raw index data in the first group of raw sample data to obtain at least one piece of numerical data and at least one piece of categorical data, where the first group of raw sample data represents any one of the N groups of raw sample data and includes the first cloud desktop raw index data and a corresponding first sample distribution coefficient; standardize the at least one piece of numerical data to obtain at least one piece of standardized data; convert the at least one piece of categorical data to obtain at least one flag variable; splice the at least one piece of standardized data, the at least one flag variable, and the first sample distribution coefficient to obtain a first group of sample data; and, after obtaining N groups of sample data, compose the sample data set from the N groups of sample data.
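The preprocessing pipeline described above, splitting raw index data into numerical and categorical fields, standardizing the numerical values, converting categories into flag variables, and splicing the result with the sample distribution coefficient, can be sketched as follows. This is a minimal illustration: the field names, category vocabularies, and the choice of z-score standardization are assumptions, since the embodiment does not fix the exact encodings (the industry field is omitted for brevity).

```python
import numpy as np

# Illustrative category vocabularies (the patent lists these values as examples).
CARRIERS = ["mobile", "telecom", "unicom"]
SCENARIOS = ["public_network", "private_line", "vpn"]
PREFERENCES = ["day", "night"]

def one_hot(value, vocabulary):
    """Convert a categorical value into flag variables (one-hot encoding)."""
    return [1.0 if value == v else 0.0 for v in vocabulary]

def standardize(values):
    """Z-score standardization of one numerical column across all samples."""
    arr = np.asarray(values, dtype=float)
    std = arr.std()
    return (arr - arr.mean()) / std if std > 0 else arr - arr.mean()

def build_sample(bandwidth_z, utilization_z, carrier, scenario, preference, coefficient):
    """Splice standardized numerical data, flag variables, and the
    sample distribution coefficient into one training record."""
    features = [bandwidth_z, utilization_z]
    features += one_hot(carrier, CARRIERS)
    features += one_hot(scenario, SCENARIOS)
    features += one_hot(preference, PREFERENCES)
    return np.array(features), coefficient

# Standardize two numerical columns over three raw samples, then
# assemble the first record (all values are made up for illustration).
bandwidths = standardize([100, 50, 200])
utilizations = standardize([0.7, 0.4, 0.9])
x, y = build_sample(bandwidths[0], utilizations[0], "mobile", "vpn", "day", 0.25)
```

The resulting feature vector has a fixed length (here 2 numerical values plus 3 + 3 + 2 flag variables), which is what lets the same three-layer network consume every sample.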
In some embodiments, the modeling unit 804 is specifically configured to initialize the preset neural network to obtain an initial neural network; correct the initial neural network using the k-th group of sample data to obtain a corrected neural network, where k is an integer greater than zero and less than or equal to N; calculate an error function of the corrected neural network; if the error function meets a preset condition, determine the corrected neural network as the preset analysis model; and if the error function does not meet the preset condition, increment k by 1, take the corrected neural network as the initial neural network, and continue the step of correcting the initial neural network with the k-th group of sample data until the error function meets the preset condition.
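The training procedure above, correcting the network sample by sample until the error function meets a preset condition, can be sketched with a minimal three-layer BP network. The layer sizes, sigmoid activations, learning rate, and error threshold below are illustrative assumptions; the embodiment only specifies a three-layer BP network whose weights and thresholds are corrected along the negative gradient of the error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes: 10 input features, 6 hidden nodes, 1 output (the
# allocation coefficient). The patent fixes neither the sizes nor the rate.
n_in, n_hid, n_out, lr = 10, 6, 1, 0.1
W1 = rng.normal(scale=0.5, size=(n_hid, n_in)); b1 = np.zeros(n_hid)
W2 = rng.normal(scale=0.5, size=(n_out, n_hid)); b2 = np.zeros(n_out)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    h = sigmoid(W1 @ x + b1)
    return h, sigmoid(W2 @ h + b2)

def correct(x, y):
    """One back-propagation correction step on a single sample: weights and
    thresholds move along the negative gradient of the squared error."""
    global W1, b1, W2, b2
    h, out = forward(x)
    err_out = (out - y) * out * (1 - out)        # output-layer delta
    err_hid = (W2.T @ err_out) * h * (1 - h)     # hidden-layer delta
    W2 -= lr * np.outer(err_out, h); b2 -= lr * err_out
    W1 -= lr * np.outer(err_hid, x); b1 -= lr * err_hid
    return 0.5 * float((out - y) ** 2)           # value of the error function

# Cycle through the N samples until the error function meets the threshold
# (synthetic samples stand in for the preprocessed cloud desktop records).
samples = [(rng.random(n_in), np.array([0.3])) for _ in range(8)]
for epoch in range(500):
    errors = [correct(x, y) for x, y in samples]
    if max(errors) < 1e-3:
        break
```

Cycling over the sample groups with an early stop on the error function mirrors the "increment k and correct again" loop of the embodiment.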
In some embodiments, the preset neural network is a BP neural network having a three-layer structure; the three-layer structure comprises an input layer, a hidden layer and an output layer.
In some embodiments, the obtaining unit 801 is further configured to obtain the CPU core number and the oversubscription percentage of the resource pool; and determine the initial allocation number of the cloud desktops according to the CPU core number and the oversubscription percentage.
In some embodiments, the determining unit 803 is specifically configured to add one to the allocation coefficient to obtain a compensation coefficient; and multiply the compensation coefficient by the initial allocation number of the cloud desktops to obtain the target allocation number of the cloud desktops.
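The two-step calculation above, deriving the initial allocation number from the resource pool's CPU core count and oversubscription percentage and then compensating it with the predicted allocation coefficient, can be sketched as follows. The cores-per-desktop sizing parameter and all numbers are illustrative assumptions, not values taken from the patent.

```python
import math

def initial_allocation(cpu_cores, oversub_percentage, cores_per_desktop=2):
    """Initial cloud-desktop count from the resource pool's CPU core number
    and its oversubscription percentage. cores_per_desktop is an assumed
    sizing parameter; the patent does not fix it."""
    return math.floor(cpu_cores * oversub_percentage / cores_per_desktop)

def target_allocation(initial, allocation_coefficient):
    """Compensation coefficient = 1 + allocation coefficient;
    target count = compensation coefficient x initial count."""
    compensation = 1 + allocation_coefficient
    return math.floor(compensation * initial)

initial = initial_allocation(cpu_cores=128, oversub_percentage=4.0)
target = target_allocation(initial, allocation_coefficient=0.15)
```

With these made-up inputs a 128-core pool oversubscribed 4:1 yields 256 initial desktops, and a predicted coefficient of 0.15 compensates that up to 294.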
It will be appreciated that, in this embodiment, a "unit" may be part of a circuit, part of a processor, part of a program or software, and so on; it may of course also be a module, or it may be non-modular. Furthermore, the components in this embodiment may be integrated in one processing unit, each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in the form of hardware or in the form of software functional modules.
If implemented in the form of software functional modules and not sold or used as independent products, the integrated units may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of this embodiment may be embodied essentially, or in part, in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or part of the steps of the method described in this embodiment. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disc.
Accordingly, the present embodiment provides a computer storage medium storing a computer program which, when executed by at least one processor, implements the cloud desktop allocation method of any of the preceding embodiments.
Referring to fig. 9, a schematic diagram of the composition structure of an electronic device 90 according to an embodiment of the present application is shown. As shown in fig. 9, the electronic device 90 may include: a communication interface 901, a memory 902, and a processor 903, with the various components coupled together by a bus system 904. It is understood that the bus system 904 is used to enable communications among these components. In addition to a data bus, the bus system 904 includes a power bus, a control bus, and a status signal bus; however, for clarity of illustration, the various buses are all labeled as the bus system 904 in fig. 9. The communication interface 901 is configured to receive and send signals when information is received from and sent to other external network elements;
a memory 902 for storing a computer program capable of running on the processor 903;
the processor 903 is configured to execute, when executing the computer program:
acquiring cloud desktop index data and initial distribution quantity of cloud desktops;
carrying out predictive analysis on cloud desktop index data by using a preset analysis model, and determining distribution coefficients;
and compensating the initial distribution quantity of the cloud desktops by using the distribution coefficient, and determining the target distribution quantity of the cloud desktops.
It is to be appreciated that the memory 902 in embodiments of the present application may be volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), which is used as an external cache. By way of example and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and Direct Rambus RAM (DRRAM). The memory 902 of the systems and methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
The processor 903 may be an integrated circuit chip with signal processing capability. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor 903 or by instructions in the form of software. The processor 903 may be a general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or perform the methods, steps, and logical blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the methods disclosed in connection with the embodiments of the present application may be embodied directly as being executed by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 902, and the processor 903 reads the information in the memory 902 and completes the steps of the above method in combination with its hardware.
It is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or a combination thereof. For a hardware implementation, the processing units may be implemented within one or more application specific integrated circuits (Application Specific Integrated Circuits, ASIC), digital signal processors (Digital Signal Processing, DSP), digital signal processing devices (DSP devices, DSPD), programmable logic devices (Programmable Logic Device, PLD), field programmable gate arrays (Field-Programmable Gate Array, FPGA), general purpose processors, controllers, microcontrollers, microprocessors, other electronic units configured to perform the functions described herein, or a combination thereof.
For a software implementation, the techniques described herein may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
Optionally, as another embodiment, the processor 903 is further configured to perform the method of any of the preceding embodiments when the computer program is run.
Referring to fig. 10, a schematic diagram of the composition structure of another electronic device 90 according to an embodiment of the present application is shown. As shown in fig. 10, the electronic device 90 includes at least the cloud desktop distribution apparatus 80 according to any of the foregoing embodiments.
For the electronic device 90, the allocation coefficient is determined from the cloud desktop index data and the preset analysis model, and the initial allocation number of cloud desktops is compensated using the allocation coefficient to obtain the final target allocation number of cloud desktops. Therefore, when cloud desktops are divided into resource pools, the computing power of the resource pools can be exerted effectively, avoiding waste of hardware resources.
The foregoing description is only of the preferred embodiments of the present application and is not intended to limit the scope of the present application.
It should be noted that, in this application, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The foregoing embodiment numbers of the present application are merely for description and do not represent the advantages or disadvantages of the embodiments.
The methods disclosed in the several method embodiments provided in the present application may be arbitrarily combined without collision to obtain a new method embodiment.
The features disclosed in the several product embodiments provided in the present application may be combined arbitrarily without conflict to obtain new product embodiments.
The features disclosed in the several method or apparatus embodiments provided in the present application may be arbitrarily combined without conflict to obtain new method embodiments or apparatus embodiments.
The foregoing is merely specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes and substitutions are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (11)

1. A cloud desktop distribution method, the method comprising:
acquiring cloud desktop index data and initial distribution quantity of cloud desktops;
carrying out predictive analysis on the cloud desktop index data by using a preset analysis model, and determining an allocation coefficient;
and compensating the initial distribution quantity of the cloud desktops by using the distribution coefficient, and determining the target distribution quantity of the cloud desktops.
2. The method of claim 1, wherein the cloud desktop index data comprises at least one of: bandwidth, carrier type, usage scenario, industry, desktop usage, and desktop usage preference.
3. The method according to claim 1, wherein the method further comprises:
acquiring a sample data set; the sample data set comprises N groups of sample data, each group of sample data comprises cloud desktop sample index data and corresponding sample distribution coefficients, and N is an integer greater than zero;
training a preset neural network by using the N groups of sample data to obtain the preset analysis model.
4. A method according to claim 3, wherein the acquiring a sample dataset comprises:
acquiring N groups of original sample data, wherein each group of original sample data comprises cloud desktop original index data and corresponding sample distribution coefficients;
in the first group of original sample data, carrying out data segmentation and extraction processing on the first cloud desktop original index data to obtain at least one numerical value data and at least one classification data; wherein the first set of raw sample data represents any one of the N sets of raw sample data, the first set of raw sample data including the first cloud desktop raw index data and a corresponding first sample distribution coefficient;
carrying out standardization processing on the at least one numerical value data to obtain at least one standardized data;
converting the at least one classified data to obtain at least one flag variable;
performing splicing processing on the at least one standardized data, the at least one flag variable and the first sample distribution coefficient to obtain a first group of sample data;
after obtaining N sets of sample data, the sample data set is composed from the N sets of sample data.
5. The method of claim 3, wherein training the predetermined neural network using the N sets of sample data to obtain the predetermined analysis model comprises:
initializing the preset neural network to obtain an initial neural network;
correcting the initial neural network by using the kth group of sample data to obtain a corrected neural network; wherein k is an integer greater than zero and less than or equal to N;
calculating an error function of the corrected neural network;
if the error function meets a preset condition, determining the corrected neural network as the preset analysis model;
and if the error function does not meet the preset condition, incrementing k by 1, determining the corrected neural network as the initial neural network, and continuing to perform the step of correcting the initial neural network by using the k-th group of sample data until the error function meets the preset condition.
6. The method of claim 3, wherein the pre-set neural network is a back propagation BP neural network having a three-layer structure; the three-layer structure comprises an input layer, a hidden layer and an output layer.
7. The method according to claim 1, wherein the method further comprises:
acquiring the CPU core number and the oversubscription percentage of the resource pool;
and determining the initial allocation quantity of the cloud desktop according to the CPU core number and the oversubscription percentage.
8. The method according to any one of claims 1 to 7, wherein the compensating the initial allocation number of cloud desktops with the allocation coefficient, determining a target allocation number of cloud desktops, includes:
adding one to the distribution coefficient to obtain a compensation coefficient;
and multiplying the compensation coefficient and the initial distribution quantity of the cloud desktop to obtain the target distribution quantity of the cloud desktop.
9. The cloud desktop distribution device is characterized by comprising an acquisition unit, an analysis unit and a determination unit, wherein,
the acquisition unit is configured to acquire cloud desktop index data and initial allocation quantity of cloud desktops;
the analysis unit is configured to utilize a preset analysis model to conduct predictive analysis on the cloud desktop index data and determine distribution coefficients;
and the determining unit is configured to compensate the initial distribution quantity of the cloud desktops by using the distribution coefficient and determine the target distribution quantity of the cloud desktops.
10. An electronic device comprising a memory and a processor, wherein,
the memory is used for storing a computer program capable of running on the processor;
the processor is configured to perform the cloud desktop allocation method according to any one of claims 1 to 8 when the computer program is run.
11. A computer storage medium storing a computer program which, when executed by at least one processor, implements the cloud desktop allocation method of any of claims 1 to 8.
CN202111360151.4A 2021-11-17 2021-11-17 Cloud desktop distribution method, device, equipment and computer storage medium Pending CN116149764A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111360151.4A CN116149764A (en) 2021-11-17 2021-11-17 Cloud desktop distribution method, device, equipment and computer storage medium


Publications (1)

Publication Number Publication Date
CN116149764A true CN116149764A (en) 2023-05-23

Family

ID=86358644

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111360151.4A Pending CN116149764A (en) 2021-11-17 2021-11-17 Cloud desktop distribution method, device, equipment and computer storage medium

Country Status (1)

Country Link
CN (1) CN116149764A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118152078A (en) * 2024-05-10 2024-06-07 中移(苏州)软件技术有限公司 Cloud desktop service dynamic configuration method and device and electronic equipment


Similar Documents

Publication Publication Date Title
WO2021063171A1 (en) Decision tree model training method, system, storage medium, and prediction method
US11907760B2 (en) Systems and methods of memory allocation for neural networks
US20190279088A1 (en) Training method, apparatus, chip, and system for neural network model
AU2020385264B2 (en) Fusing multimodal data using recurrent neural networks
CN108804641A (en) A kind of computational methods of text similarity, device, equipment and storage medium
US20210103858A1 (en) Method and system for model auto-selection using an ensemble of machine learning models
CN111582538B (en) Community value prediction method and system based on graph neural network
CN113128478B (en) Model training method, pedestrian analysis method, device, equipment and storage medium
CN111127364A (en) Image data enhancement strategy selection method and face recognition image data enhancement method
CN113220450A (en) Load prediction method, resource scheduling method and device for cloud-side multi-data center
CN111753995A (en) Local interpretable method based on gradient lifting tree
CN111178196B (en) Cell classification method, device and equipment
CN114492601A (en) Resource classification model training method and device, electronic equipment and storage medium
CN116149764A (en) Cloud desktop distribution method, device, equipment and computer storage medium
US12001174B2 (en) Determination of task automation using an artificial intelligence model
CN113706551A (en) Image segmentation method, device, equipment and storage medium
CN117034090A (en) Model parameter adjustment and model application methods, devices, equipment and media
CN112257958A (en) Power saturation load prediction method and device
CN116451081A (en) Data drift detection method, device, terminal and storage medium
CN110705889A (en) Enterprise screening method, device, equipment and storage medium
CN116958624A (en) Method, device, equipment, medium and program product for identifying appointed material
US20220292393A1 (en) Utilizing machine learning models to generate initiative plans
CN113657501A (en) Model adaptive training method, apparatus, device, medium, and program product
CN111950602A (en) Image indexing method based on random gradient descent and multi-example multi-label learning
CN112868048A (en) Image processing learning program, image processing program, information processing device, and image processing system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination