WO2020118432A1 - Data set access for updating machine learning models - Google Patents

Data set access for updating machine learning models

Info

Publication number
WO2020118432A1
Authority
WO
WIPO (PCT)
Prior art keywords
client
main server
client device
machine learning
updates
Prior art date
Application number
PCT/CA2019/051784
Other languages
English (en)
Inventor
Paul Gagnon
Misha BENJAMIN
Original Assignee
Element Ai Inc.
Priority date
Filing date
Publication date
Application filed by Element Ai Inc. filed Critical Element Ai Inc.
Publication of WO2020118432A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0283Price estimation or determination

Definitions

  • the present invention relates to resource allocation for computer servers. More specifically, the present invention relates to systems and methods for allocating resources for updating implementations of machine learning models as well as for executing client jobs.
  • the present invention provides systems and methods for managing resources for a main server that updates implementations of machine learning models used by client devices.
  • the main server allocates resources to jobs for client devices, including updates to machine learning models operating on or operated for client devices, based on restrictions of use for data sets generated by the client devices.
  • in addition to restrictions (or lack thereof) as to the uses for the client’s data set, other bases for allocating resources to the jobs and needs of the client on the main server may be used.
  • the present invention provides a system for managing resources for at least one main server, said at least one main server being used for machine learning applications, the system comprising:
  • at least one resource allocation module for allocating resources of said at least one main server for use in service of at least one client device, an allocation of said resources being based on at least one predetermined criterion;
  • a model distribution module for distributing updates for at least one implementation of a machine learning model used by said at least one client device, said updates being distributed to said at least one client device; and
  • a model update module that updates said at least one implementation based on resources allocated for said at least one client device.
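  • By way of a non-limiting illustration, the cooperation of these three modules may be sketched in Python as follows; the class and method names (ResourceAllocator, ModelUpdater, ModelDistributor) and the core-count criterion are assumptions made for this sketch only, not part of the disclosure:

```python
# A minimal sketch of the claimed modules; all names are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ClientDevice:
    client_id: str
    model_version: str = "v0"

@dataclass
class ResourceAllocator:
    """Allocates main-server resources based on a predetermined criterion."""
    criterion: Callable[[ClientDevice], int]  # e.g. returns a CPU-core count

    def allocate(self, client: ClientDevice) -> int:
        return self.criterion(client)

class ModelUpdater:
    """Updates an implementation using the resources allocated for the client."""
    def update(self, client: ClientDevice, cores: int) -> str:
        # A real system would retrain here; this stub only records the event.
        return f"{client.model_version}+update(cores={cores})"

class ModelDistributor:
    """Distributes the updated implementation back to the client device."""
    def distribute(self, client: ClientDevice, new_version: str) -> None:
        client.model_version = new_version

# Usage: a predetermined criterion giving an open-data client more cores.
allocator = ResourceAllocator(lambda c: 8 if c.client_id == "open-data" else 2)
client = ClientDevice("open-data")
update = ModelUpdater().update(client, allocator.allocate(client))
ModelDistributor().distribute(client, update)
print(client.model_version)  # v0+update(cores=8)
```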
  • the present invention provides a system for managing resources for at least one main server, said at least one main server being used for machine learning applications, the system comprising:
  • at least one parameter gathering module for gathering parameters regarding said at least one main server and at least one client device;
  • a calculation module for determining resource metrics of said at least one main server based on said parameters;
  • a decision module for generating at least one decision regarding resource allocation based on said resource metrics and based on predetermined criteria; and
  • at least one resource allocation module for allocating resources of said at least one main server for use in service of said at least one client device, an allocation of said resources being based on said at least one decision generated by said decision module;
  • wherein said at least one main server is for updating at least one implementation of a machine learning model that operates on data from said at least one client device.
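  • A minimal sketch of this gather-parameters, decide, allocate pipeline follows; the function names, metric fields, and criteria values are illustrative assumptions:

```python
# Sketch of the gather -> decide -> allocate pipeline; names are assumptions.
def gather_parameters() -> dict:
    # In practice these would be live metrics from the server and clients.
    return {"free_cores": 16, "queued_jobs": 3, "bandwidth_mbps": 900}

def decide(metrics: dict, criteria: dict) -> dict:
    # Predetermined criteria: keep a core reserve, cap per-client share.
    usable = max(0, metrics["free_cores"] - criteria["reserve_cores"])
    return {"cores_per_client": min(usable, criteria["max_cores_per_client"])}

def allocate(decision: dict, client_id: str) -> str:
    return f"client {client_id}: {decision['cores_per_client']} cores"

metrics = gather_parameters()
decision = decide(metrics, {"reserve_cores": 4, "max_cores_per_client": 8})
print(allocate(decision, "client-42"))  # client client-42: 8 cores
```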
  • FIGURE 1 is a block diagram of a system according to one aspect of the present invention.
  • FIGURE 2 is a block diagram of a system according to another aspect of the present invention.
  • the present invention relates to systems and methods for managing resources for at least one main server that is in communication with at least one client device.
  • a main server is one that updates/generates/trains implementations of machine learning models and distributes the generated/trained/updated implementations to the clients.
  • clients are devices that run the implementations of the machine learning models and may be servers themselves or they may be edge devices such as mobile devices, personal computers, or any other data processing device. If the clients are servers, these servers may run/operate the implementation of the machine learning model on behalf of or for edge devices such as those noted previously.
  • the main server operates to update the implementation of the machine learning model running on the clients. This update may involve a retraining of the model using one or more data sets from a specific client or from multiple clients.
  • the system of the present invention may only use data sets from that specific client to update the model running on that client. This ensures that the implementation of the model is specific to the data generated by that client.
  • the system may use the data from the specific client as well as data sets from other clients. The resulting updated model should then be applicable for use by clients other than the specific client.
  • the data sets used to update the model may only be from specific clients.
  • the clients may have a commonality suitable for the model. As an example, for a specific client, that client might only be interested in a model trained on or updated with data sets from clients dealing with retail sales. Thus, for this example, data sets from clients who deal with business to business transactions would not be used in updating the model.
  • the main server may thus have multiple versions of the same machine learning model, with each version being updated/trained using differing data sets.
  • the main server would ensure that the correct version of the model is transmitted to the correct client.
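  • The segregation of data sets into per-client and per-commonality model versions might be organized as in the sketch below; the pool keys, sector labels, and stub training step are illustrative assumptions:

```python
# Sketch: one model version per data pool; keys and labels are assumptions.
from collections import defaultdict

pools = defaultdict(list)   # version key -> contributing data sets
versions = {}               # version key -> trained model (stub)

def add_data_set(client_id: str, sector: str, data: list, private: bool) -> None:
    # A private client gets its own version; others pool by commonality.
    key = f"private/{client_id}" if private else f"sector/{sector}"
    pools[key].append(data)

def train(key: str) -> None:
    versions[key] = f"model trained on {len(pools[key])} data set(s)"  # stub

add_data_set("acme", "retail", [1, 2, 3], private=False)
add_data_set("globex", "retail", [4, 5], private=False)
add_data_set("initech", "b2b", [6], private=True)
for key in pools:
    train(key)
print(versions)  # retail clients share one version; initech has its own
```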
  • the main server would also operate to update the models on the various clients, but the clients may control the main server’s access to the data generated by the clients.
  • a client may allow the data it generates to be used in the updating of the model that the client currently deploys.
  • the client may also disallow the use of its data for updating the model for other clients (i.e. only the specific client’s version of the model would be updated with that specific client’s data).
  • This limitation may be implemented to address privacy concerns, business considerations, etc., etc.
  • the client may allow the main server to use that client’s data in the updating of one or more models being used by that client.
  • a client may allow the main server free rein in using that client’s data: the main server may thus use that client’s data to update any models, create other models, or otherwise use the data for whatever uses the main server may see fit.
  • the client may also constrain the main server’s use of the data to protect its own privacy as well as the privacy of its users (or the privacy of those whose data may be included in the data set).
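  • Such permissions could be recorded as flags in the client’s profile, as in the sketch below; all field names are hypothetical:

```python
# Sketch of data-use permission flags; all field names are hypothetical.
from dataclasses import dataclass

@dataclass
class DataUsePolicy:
    use_for_own_model: bool = True        # update this client's own version
    use_for_other_clients: bool = False   # feed updates shipped to others
    use_for_new_models: bool = False      # seed entirely new models

def allowed_uses(policy: DataUsePolicy) -> list:
    return [name for name, ok in vars(policy).items() if ok]

print(allowed_uses(DataUsePolicy()))                  # own model only
print(allowed_uses(DataUsePolicy(True, True, True)))  # free rein
```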
  • a client may receive an update as soon as the update is ready, and this client may be configured to receive as many updates as are available.
  • a client may receive only one update a month (even if there are multiple updates in a month). Fixed delivery time periods other than monthly can, of course, be implemented.
  • a client may be entitled to only a fixed number of updates per calendar year. For this option, the client may need to manually pull or download the update from the main server as opposed to the main server pushing or sending the update to the client.
  • the client can no longer download updates unless a different arrangement is made with the operators of the main server.
  • Such limitations may be imposed on the client by the operators of the main server for various reasons including the potential need to reoptimize the main server’s configuration if the client downloads more updates, potential abuse of the system by the client, bandwidth efficiency concerns, etc., etc.
  • the access to the updates may be tied to the main server’s access to the client’s data.
  • a client may be entitled to receive as many updates as are available if that client removes any restrictions on the use of its data sets.
  • the main server may configure its parameters so that that client would receive all updates.
  • the main server may be configured to only allow that client to access a limited amount or number of updates.
  • the general rule may be that increased access (or lack of restrictions) to a client’s data set would entitle the client to access to more updates.
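  • These entitlement options may be expressed as a small policy check, as sketched below; the policy strings and quota semantics are assumptions for illustration:

```python
# Sketch of the update-entitlement options; policy names are assumptions.
def updates_allowed(policy: str, delivered_this_period: int, quota: int) -> bool:
    if policy == "all":        # unrestricted data set: every available update
        return True
    if policy == "per_month":  # at most `quota` pushed updates per month
        return delivered_this_period < quota
    if policy == "per_year":   # fixed yearly quota; the client pulls manually
        return delivered_this_period < quota
    return False               # entitlement exhausted, new arrangement needed

print(updates_allowed("all", 12, 0))       # True
print(updates_allowed("per_month", 1, 1))  # False: monthly quota used up
```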
  • the resources that may be allocated may include computing cycles, time executing jobs specific to the specific client, processing or computing power (e.g. a number of virtual machines operating on the main server dedicated to servicing the specific client’s needs), an amount of RAM allocated for jobs specific to the specific client, a number of processor cores dedicated to that specific client’s needs/jobs, an amount of virtual memory dedicated to that specific client’s needs/jobs, a priority given to the specific client’s jobs/processes, etc., etc.
  • the concept is that of tying the main server’s access to the client’s data to the amount of resources allocated to servicing that client’s needs.
  • updates to a model for the specific client may be given a higher priority on the main server if the specific client has minimal restrictions on the use of the data sets it generates.
  • processes on the main server that lead to updates to the model may only be run/executed for the specific client if that specific client has placed multiple restrictions on the uses for its data sets.
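  • A sketch of such a priority rule follows, assuming a linear penalty of 10 points per restriction (an arbitrary choice for illustration):

```python
# Sketch: update-job priority falls as data-set restrictions rise.
def update_priority(num_restrictions: int, base: int = 100) -> int:
    """Fewer restrictions yield a higher priority; the 10-point
    penalty per restriction is an arbitrary illustrative weight."""
    return max(0, base - 10 * num_restrictions)

print(update_priority(0))  # 100: minimal restrictions, update runs first
print(update_priority(7))  # 30: heavily restricted, update may wait
```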
  • a client may be able to access more resources and/or more updates if the client’s owners/operators pay a premium to the operators/owners of the main server.
  • a sliding scale of premium payments may be used such that higher payments would entitle a client to more resources and/or more (and more frequent) updates.
  • This scheme can be combined with differing access/restrictions to the client’s data sets so that a client may offset higher premium payments with more data set use restrictions to still be able to access sufficient resources and/or updates. Similarly, a client may pay lower premium payments and offset those with lower data set use restrictions to be able to access sufficient resources and/or updates. Conversely, a client may pay high premium payments, not allow for any use of its data sets, and still be able to access sufficient resources and/or updates.
  • the scheme can also be extended to take into account the quality of the data from the client as well as whether the main server actually needs or requires more data. Such an extension would have the main server determining whether the data from the client is suitable for the main server’s needs and, depending on that determination, applying suitable adjustments to premium payments due from the client. Or, in another variant, the main server may request specific types of data (with suitably low data set restrictions) from the client and, if the client accedes to the request, the main server would, in essence, reward the client with more access to resources.
  • the above scheme may be extended to include the concept of real-time pricing for access to computing resources and/or updates.
  • Such an extension of the above scheme would involve a real-time or near real-time monitoring of conditions involving the main server and any clients that would be affected. Based on the prevailing conditions, access to higher levels of resources and/or to more updates and/or to more frequent updates may require less restrictions on the client's data sets and/or higher premium payments.
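  • One possible pricing function combining prevailing load, data-set restrictions, and premium tier is sketched below; every weight in it is an illustrative assumption, not a value taken from the disclosure:

```python
# Sketch of real-time "pricing"; every weight here is an assumption.
def resource_price(base_price: float, server_load: float,
                   num_restrictions: int, premium_tier: int) -> float:
    """Higher prevailing load raises the price; fewer data-set
    restrictions and a higher premium tier lower it, so one can
    offset the other as described above."""
    load_factor = 1.0 + server_load              # server_load in [0, 1]
    restriction_factor = 1.0 + 0.2 * num_restrictions
    tier_discount = 0.9 ** premium_tier
    return base_price * load_factor * restriction_factor * tier_discount

print(round(resource_price(10.0, 0.8, 0, 3), 2))  # 13.12: open data, high tier
print(round(resource_price(10.0, 0.8, 5, 0), 2))  # 36.0: restricted, no tier
```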
  • the main server can check to see if such traffic can be pre-empted to upload the update to the client.
  • the main server would need to check the client's profile in a database to determine the restrictions (if any) on the client's data sets, the level of payments for the client (i.e. is the client on a high tier of payments necessitating a higher level of service), as well as the level of service to be provided to the client.
  • the main server may check the client's profile in the database to determine if other jobs running on the main server may be pre-empted by the model update or if the model update may be placed further down the priority queue. If the model update is to be placed further down the queue, other jobs may be placed ahead of it and the client may not receive a model update until a later time.
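  • The queue placement described here might look like the following sketch, using a simple max-priority heap; the tier threshold and priority values are assumptions:

```python
# Sketch: queueing or pre-empting the update job based on the profile.
import heapq

queue: list = []  # min-heap of (-priority, job name)

def submit(job: str, priority: int) -> None:
    heapq.heappush(queue, (-priority, job))

def schedule_update(profile: dict) -> None:
    # Thresholds and priority values are illustrative assumptions.
    if profile["tier"] >= 2 and not profile["data_restricted"]:
        submit("model-update", priority=90)   # may pre-empt other jobs
    else:
        submit("model-update", priority=10)   # placed further down the queue

submit("batch-job", priority=50)
schedule_update({"tier": 3, "data_restricted": False})
print(heapq.heappop(queue))  # (-90, 'model-update'): runs before batch-job
```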
  • in another variant, the client does not run the implementation of the machine learning model.
  • the main server may operate to run the model on behalf of the client.
  • the client would thus upload the data to be run through the model on the main server and, depending on the prevailing conditions, the predetermined service level associated with that client, and the availability of the main server's resources, the main server would run the data through the model and transmit the results to the client.
  • the scheduling as to when the data would be passed through the model on the main server may be determined by conditions and parameters such as how busy the main server is, the service level for that client, and the main server's resources available to be tasked to the client's needs.
  • if the client's data set is freely available to the main server for use in updating models and the client is at a high service level, then the data may be placed at a high priority level. If, on the other hand, the client's data cannot be used for updates and the client is at a lower service level (perhaps due to subscribing to a lower payment tier), the data may be placed at a lower priority level. Thus, jobs for a specific client and the priority assigned to such jobs may be affected by the main server's access to the client's data set as well as the payment plan/payment tier that the client subscribes to. Other possibilities may, of course, exist.
  • a client's jobs may be placed at a higher priority level if the prevailing conditions indicate that the main server is not busy but if the main server is busy, then that client's jobs may be placed at a lower priority. Or, conversely, a client's jobs may be placed at a high priority regardless of the prevailing conditions or a client's jobs may be at a priority level such that it cannot be pre-empted in its place in the queue for execution.
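  • A sketch of such a priority assignment follows, with the non-pre-emptible case modeled by a fixed_high flag; the flag and the weights 20 and 30 are assumptions of this sketch:

```python
# Sketch: job priority from data access, service level, and server load.
def job_priority(data_usable: bool, service_level: int, server_busy: bool,
                 fixed_high: bool = False) -> int:
    """fixed_high models the non-pre-emptible case mentioned above;
    the weights 20 and 30 are illustrative assumptions."""
    if fixed_high:
        return 100
    priority = 20 * service_level + (30 if data_usable else 0)
    return priority // 2 if server_busy else priority

print(job_priority(True, 3, server_busy=False))  # 90: runs promptly
print(job_priority(False, 1, server_busy=True))  # 10: waits behind other work
```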
  • the frequency of model updates may be dependent on whether restrictions are placed on the use of the client's data set as well as the service level that the client is entitled to, given the client's payment tier.
  • the differences between the model on the main server and the model on the client may be what is exchanged. These differences, whether in the form of new weights, new parameters, updated links between nodes in the models, etc., etc., may be exchanged regardless of where the model has been updated.
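  • Exchanging only the differences could be sketched as a parameter-wise delta, as below; the flat dict-of-weights representation is a simplifying assumption:

```python
# Sketch: ship only the parameter deltas between server and client models.
def weight_delta(server_w: dict, client_w: dict) -> dict:
    # Only parameters that differ are exchanged, whichever side updated.
    return {k: server_w[k] - client_w.get(k, 0.0)
            for k in server_w if server_w[k] != client_w.get(k)}

def apply_delta(weights: dict, delta: dict) -> dict:
    return {k: v + delta.get(k, 0.0) for k, v in weights.items()}

server = {"w1": 0.52, "w2": -1.30, "b": 0.10}
client = {"w1": 0.50, "w2": -1.30, "b": 0.08}
delta = weight_delta(server, client)
print(delta)                       # {'w1': ~0.02, 'b': ~0.02} (float rounding)
print(apply_delta(client, delta))  # client now matches the server's version
```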
  • when the main server receives new data that is useful for its models and/or the processes it is executing, the main server can provide some form of advantage to the client that the client may not have had previously, or the main server may extend the period of time during which the client has an already existing advantage.
  • a form of compensation from the main server to the client, in exchange for the data useful to the main server, can thus be provided. As noted above, this compensation may take the form of a technical advantage such as faster processing, more access to resources, and/or more access to data/models.
  • the system 10 includes a data reception module 20, multiple resource allocation modules 30A, 30B, 30C, a model distribution module 40, and a model update module 50.
  • a database 60 may also be present to store client profiles.
  • the data reception module 20 receives data and data sets from client devices and processes these data sets accordingly.
  • the system checks the profile of the client that the data set came from and, depending on the contents and settings in that profile, the data sets are processed accordingly.
  • a data set coming from a client who has restricted the use of its data sets such that these data sets cannot be used for updating the model would be segregated from other data sets that would be used for model updates.
  • data sets from clients that allow their data sets to be used for model updates may have those data sets stored for these model updates.
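  • The routing performed by the data reception module 20 might be sketched as follows; the profile flag allow_model_updates is a hypothetical name:

```python
# Sketch of the data reception module 20; the profile flag is hypothetical.
update_pool: list = []   # data sets usable for model updates
segregated: list = []    # data sets whose use for updates is restricted

def receive(client_id: str, data_set: list, profiles: dict) -> None:
    if profiles[client_id].get("allow_model_updates", False):
        update_pool.append((client_id, data_set))
    else:
        segregated.append((client_id, data_set))  # stored, never trained on

profiles = {"acme": {"allow_model_updates": True},
            "initech": {"allow_model_updates": False}}
receive("acme", [1, 2, 3], profiles)
receive("initech", [4, 5], profiles)
print(len(update_pool), len(segregated))  # 1 1
```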
  • the multiple resource allocation modules 30A, 30B, 30C would each be tasked with allocating different resources for specific clients.
  • the module 30A may be tasked with allocating processing cycles (i.e. processing time) to specific jobs for specific clients.
  • the system would, again, assess that client’s profile from the database 60 to determine the service level associated with that client, based on the client’s setting for the main server’s access to the client’s data sets as well as the client’s payment tier.
  • module 30A may allocate processing cycles to the jobs for that specific client. If a data set has just been received from that client, then that data set may be used in an update to the model and the module 30A would allocate processing cycles to that model update process based on the client’s profile.
  • module 30B may allocate a priority to the jobs for that client. With the processing cycles allocated, the module 30B would then assign a priority to the model update job for that client and place the job in a suitable queue for execution. The priority would, again, depend on the client’s profile, including the client’s restrictions on the use of its data sets and its payment tier.
  • the third module 30C may be tasked with allocating processor cores to the jobs for clients.
  • the system may thus assign which processor cores to execute the model update. Again, this assignment of processor cores and how many cores to assign may be based on the client’s profile. Of course, jobs other than model updates may be executed for the client. In the above described variant, the system may actually execute the implementation of the machine learning model on the client’s data. Passing the client’s data through the model would be a job executed on behalf of the client and would be subject to the settings and decisions assigned by the resource allocation modules.
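  • The division of labor among modules 30A, 30B and 30C could be sketched as three functions driven by one profile; the cycle counts, priority formula, and core assignments are illustrative assumptions:

```python
# Sketch of modules 30A/30B/30C acting in turn on one client job.
def allocate_cycles(profile: dict) -> int:      # module 30A (illustrative)
    return 1_000_000 if profile["open_data"] else 250_000

def assign_priority(profile: dict) -> int:      # module 30B (illustrative)
    return profile["tier"] * 10 + (5 if profile["open_data"] else 0)

def assign_cores(profile: dict) -> list:        # module 30C (illustrative)
    return list(range(4)) if profile["tier"] >= 2 else [0]

profile = {"open_data": True, "tier": 2}        # fetched from database 60
job = {"cycles": allocate_cycles(profile),
       "priority": assign_priority(profile),
       "cores": assign_cores(profile)}
print(job)  # {'cycles': 1000000, 'priority': 25, 'cores': [0, 1, 2, 3]}
```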
  • the system includes a model distribution module 40.
  • the updates to the models operating in the clients would be distributed by the module 40.
  • the schedule for the distribution as well as other parameters surrounding that distribution may be determined by module 40 based on the contents of the client’s profile as explained in the examples above.
  • a model update module 50 is also present for implementations where the main server itself updates the various models.
  • This module 50 performs the update of the various models using the resources allocated by the resource allocation modules. Once a model has been updated, it can then be passed to the model distribution module.
  • the database 60 contains the profiles of the various clients.
  • the profiles may contain details of the limitations or restrictions placed by the client on its data sets.
  • the profile may contain the client’s payment tier, its associated service level, as well as other details relating to the client’s account.
  • the treatment of the client’s data as well as of the client’s version of the model being executed would, as explained above, depend on the entries in the client’s profile.
  • the system 100 includes multiple parameter gathering modules 110A, 110B, 110C, a calculation module 120, a decision module 130, and multiple resource allocation modules 140A, 140B, 140C.
  • a database 150 contains the various client profiles that would be used by the system in determining how to allocate resources.
  • the various parameter gathering modules 110A, 110B, and 110C of the system operate to gather current operating conditions for both the main server and the various clients being serviced by the main server. These parameters may include data traffic congestion both within the main server and between the main server and the various clients. As well, the parameters may include the number of jobs/tasks queued for execution on the main server, the amount of available memory, the processor utilization metrics for each processor core, as well as other metrics detailing the amount of activity in and for the main server. In addition, the parameter gathering modules gather data that serve as an indication of the amount of resources available to the main server.
  • the calculation module 120 determines the actual resources available to the main server based on the parameters gathered by the various modules 110A, 110B, and 110C.
  • the current conditions are determined by the calculation module and the decision module 130 determines how to allocate resources to the various jobs for the various clients. This is done by consulting the various client profiles stored in the database 150. The decision module would, based on predetermined criteria such as available resources, priority requirements, and others, determine which resources (and how much) are to be allocated to which job for which client. For real-time or near-real-time “pricing” of resources, the current conditions can be taken into account when determining charges for a client’s access to resources such as processing time and data transmission bandwidth.
  • the system may be configured such that, generally, at times of resource scarcity, more will be charged for a client to access the scarce resource.
  • the system may also be configured such that, instead of currency, the clients would be charged in terms of access to their data sets - to access resources at times of resource scarcity, more access (i.e. less restrictions on their use by the main server) would be needed to the client’s data sets. This amount of access can, of course, be offset by an increase in the premium payments from the client.
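  • Such scarcity-based charging, payable in currency or in data-set access, might be sketched as below; the 20% scarcity threshold and the doubling multiplier are assumptions:

```python
# Sketch: scarce resources cost more, in currency or in data-set access.
def scarcity_charge(free_fraction: float, base_fee: float) -> dict:
    """free_fraction is the idle share of server resources; the 0.2
    threshold and 2x multiplier are illustrative assumptions."""
    multiplier = 2.0 if free_fraction < 0.2 else 1.0
    access = "unrestricted" if multiplier > 1.0 else "own-model-only"
    return {"fee": base_fee * multiplier, "or_data_access": access}

print(scarcity_charge(0.10, 5.0))  # scarce: double fee or open the data set
print(scarcity_charge(0.50, 5.0))  # plentiful: base fee, minimal access
```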
  • the present invention may thus take the form of computer executable instructions that, when executed, implement various software modules with predefined functions.
  • 'audio files' refer to digital audio files, unless otherwise specified.
  • 'Video', 'video files', 'data objects', 'data files' and all other such terms should be taken to mean digital files and/or data objects, unless otherwise specified.
  • the embodiments of the invention may be executed by a computer processor or similar device programmed in the manner of method steps or may be executed by an electronic system which is provided with means for executing these steps.
  • an electronic memory means such as computer diskettes, CD-ROMs, Random Access Memory (RAM), Read Only Memory (ROM) or similar computer software storage media known in the art, may be programmed to execute such method steps.
  • electronic signals representing these method steps may also be transmitted via a communication network.
  • Embodiments of the invention may be implemented in any conventional computer programming language.
  • preferred embodiments may be implemented in a procedural programming language (e.g., “C” or “Go”) or an object-oriented language (e.g., “C++”, “Java”, “PHP”, “Python” or “C#”).
  • Alternative embodiments of the invention may be implemented as pre-programmed hardware elements, other related components, or as a combination of hardware and software components.
  • Embodiments can be implemented as a computer program product for use with a computer system.
  • Such implementations may include a series of computer instructions fixed either on a tangible medium, such as a computer readable medium (e.g., a diskette, CD-ROM, ROM, or fixed disk) or transmittable to a computer system, via a modem or other interface device, such as a communications adapter connected to a network over a medium.
  • the medium may be either a tangible medium (e.g., optical or electrical communications lines) or a medium implemented with wireless techniques (e.g., microwave, infrared or other transmission techniques).
  • the series of computer instructions embodies all or part of the functionality previously described herein. Those skilled in the art should appreciate that such computer instructions can be written in a number of programming languages.
  • Such instructions may be stored in any memory device, such as semiconductor, magnetic, optical or other memory devices, and may be transmitted using any communications technology, such as optical, infrared, microwave, or other transmission technologies.
  • a computer program product may be distributed as a removable medium with accompanying printed or electronic documentation (e.g., shrink-wrapped software), preloaded with a computer system (e.g., on system ROM or fixed disk), or distributed from a server over a network (e.g., the Internet or World Wide Web).
  • some embodiments of the invention may be implemented as a combination of both software (e.g., a computer program product) and hardware. Still other embodiments of the invention may be implemented as entirely hardware, or entirely software (e.g., a computer program product).

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • Development Economics (AREA)
  • General Physics & Mathematics (AREA)
  • Strategic Management (AREA)
  • Economics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Marketing (AREA)
  • Human Resources & Organizations (AREA)
  • General Business, Economics & Management (AREA)
  • Game Theory and Decision Science (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • General Engineering & Computer Science (AREA)
  • Educational Administration (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides systems and methods for managing resources for a main server that updates implementations of machine learning models used by client devices. The main server allocates resources to jobs for client devices, including updates to machine learning models operating on, or operated for, client devices, based on restrictions of use for data sets generated by the client devices. In addition to the restrictions (or lack thereof) on the uses for the client's data set, other bases for allocating resources to the client's jobs and needs on the main server may be used.
PCT/CA2019/051784 2018-12-13 2019-12-11 Data set access for updating machine learning models WO2020118432A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201862779175P 2018-12-13 2018-12-13
US62/779,175 2018-12-13

Publications (1)

Publication Number Publication Date
WO2020118432A1 (fr) 2020-06-18

Family

ID=71076685

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CA2019/051784 WO2020118432A1 (fr) 2019-12-11 Data set access for updating machine learning models

Country Status (1)

Country Link
WO (1) WO2020118432A1 (fr)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006102122A2 (fr) * 2005-03-18 2006-09-28 Wink Technologies, Inc. Search engine with user feedback to improve search results
US20140280065A1 (en) * 2013-03-13 2014-09-18 Salesforce.Com, Inc. Systems and methods for predictive query implementation and usage in a multi-tenant database system
US20170034023A1 (en) * 2015-07-27 2017-02-02 Datagrid Systems, Inc. Techniques for evaluating server system reliability, vulnerability and component compatibility using crowdsourced server and vulnerability data
US20170140262A1 (en) * 2012-03-09 2017-05-18 Nara Logics, Inc. Systems and methods for providing recommendations based on collaborative and/or content-based nodal interrelationships
US9721296B1 (en) * 2016-03-24 2017-08-01 Www.Trustscience.Com Inc. Learning an entity's trust model and risk tolerance to calculate a risk score
US20170364822A1 (en) * 2016-06-15 2017-12-21 Google Inc. Optimizing content distribution using a model
WO2018125264A1 (fr) * 2016-12-30 2018-07-05 Google Llc Assessing accuracy of a machine learning model

Similar Documents

Publication Publication Date Title
CN110858161B (zh) Resource allocation method, apparatus, system, device and medium
CN109324900B (zh) Bid-based resource sharing for message queues in an on-demand services environment
US8732310B2 (en) Policy-driven capacity management in resource provisioning environments
US8856797B1 (en) Reactive auto-scaling of capacity
US8612615B2 (en) Systems and methods for identifying usage histories for producing optimized cloud utilization
CN112165691B (zh) Content delivery network scheduling method, apparatus, server and medium
US10616139B1 (en) Reducing quota access
US9112809B2 (en) Method and apparatus for controlling utilization in a horizontally scaled software application
CN107003887A (zh) CPU overload setting and cloud computing workload scheduling mechanism
CN104243405B (zh) Request processing method, apparatus and system
US20160142323A1 (en) Systems and/or methods for resource use limitation in a cloud environment
US20030028642A1 (en) Managing server resources for hosted applications
WO2019184445A1 (fr) Service resource allocation
CN103069406A (zh) Managing streaming media bandwidth for multiple clients
CN115421930B (zh) Task processing method, system, apparatus, device and computer-readable storage medium
WO2019201319A1 (fr) System and method for using digital currency in a communication network
CN110727511B (zh) Application control method, network-side device and computer-readable storage medium
CN117135130A (zh) Server control method, apparatus, electronic device and storage medium
WO2020118432A1 (fr) Data set access for updating machine learning models
US11755379B2 (en) Liaison system and method for cloud computing environment
CN110442455A (zh) Data processing method and apparatus
Xu A novel machine learning-based framework for channel bandwidth allocation and optimization in distributed computing environments
WO2020166617A1 (fr) Resource contention arbitration apparatus, resource contention arbitration method, and program
Edward Gerald et al. A fruitfly-based optimal resource sharing and load balancing for the better cloud services
JP6815975B2 (ja) API management system and API management method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19895238; Country of ref document: EP; Kind code of ref document: A1)

NENP Non-entry into the national phase (Ref country code: DE)

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established (Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 23.09.2021))

122 Ep: pct application non-entry in european phase (Ref document number: 19895238; Country of ref document: EP; Kind code of ref document: A1)