CN115114034A - Distributed computing method and device - Google Patents

Distributed computing method and device Download PDF

Info

Publication number
CN115114034A
CN115114034A (application CN202211050556.2A)
Authority
CN
China
Prior art keywords
computing
task
entity
node
processing capacity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211050556.2A
Other languages
Chinese (zh)
Inventor
卢放
张贵海
武磊之
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lantu Automobile Technology Co Ltd
Original Assignee
Lantu Automobile Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lantu Automobile Technology Co Ltd
Priority to CN202211050556.2A
Publication of CN115114034A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The invention discloses a distributed computing method and a distributed computing device. In the method, a computing entity is connected with at least one computing node in a communication network; the computing entity obtains the current processing capacity of the computing node for at least one computing task; the computing entity distributes all or part of the computing task to at least one computing node according to the processing capacity; and the computing node executes the computing task and sends the computing result of the computing task to the computing entity. Because the computing entity distributes computing tasks to the computing nodes according to their processing capacity, distributed computing of driving, perception and interaction data is achieved, supporting functions such as driving and interaction of the intelligent automobile.

Description

Distributed computing method and device
Technical Field
The invention relates to the field of data processing, in particular to a distributed computing method and device.
Background
With the development of intelligent automobiles, the volume of driving, perception and interaction data keeps growing, and so does the computational load of processing it. The vehicle-mounted computer terminal of a typical intelligent automobile struggles to provide accurate and/or timely computing support on its own.
Disclosure of Invention
The embodiments of the invention address at least the above problems in the prior art: a computing entity distributes computing tasks to computing nodes according to their processing capacity, so that distributed computing of driving, perception and interaction data is achieved and functions such as driving and interaction of the intelligent automobile are supported.
A first aspect of the embodiments of the present invention provides a distributed computing method. In the method, a computing entity is connected with at least one computing node in a communication network; the computing entity obtains the current processing capacity of the computing node for at least one computing task; the computing entity distributes all or part of the computing task to at least one computing node according to the processing capacity; and the computing node executes the computing task and sends the computing result of the computing task back to the computing entity.
In a disclosed embodiment, the allocation of the computing task is configured such that the computing entity sends at least one processing request to at least one computing node; the computing node sends its current resource usage to the computing entity in response to the processing request; the computing entity derives the computing node's current processing capacity from the resource usage; and the computing entity distributes the computing task to at least one computing node according to that current processing capacity.
In a disclosed embodiment, when a computing node determines that its resource usage is greater than or equal to a usage threshold, the computing node sends waiting information to the computing entity; the computing entity then allocates all or part of the computing task to at least one computing node based on the current processing capacity and the waiting information.
In a disclosed embodiment, the computing node configures the usage threshold according to the task type and the computing requirement of the computing task.
In a disclosed embodiment, the computing node limits the consumption of its resources in accordance with the processing request.
In a disclosed embodiment, the computing node limits resource consumption according to the task type and the computing requirement of each computing task.
In a disclosed embodiment, the computing entity derives, from the resource usage, the loss of the computing node's processing capacity for the computing task; the computing entity then allocates all or part of the computing task according to the waiting information and the current loss of processing capacity.
In a disclosed embodiment, the computing entity obtains a resource-capability evaluation model of the computing node for the computing task, and predicts, according to the resource-capability evaluation model, the loss of processing capacity that the current resource usage causes for the computing task.
In a disclosed embodiment, the resource-capability evaluation model is obtained as follows: when the computing entity first connects to the computing node, the computing entity initializes the resource-capability evaluation model; the computing entity sends at least one test task to the computing node; the computing node executes the test task under at least two different test usage levels and returns the test result obtained at each level; and the computing entity forms sample groups consisting of the test results and the associated test usages, and creates the resource-capability evaluation model from at least two such sample groups.
In the embodiment disclosed in the present invention, the test utilization includes at least two of processor utilization, graphics processor utilization, memory utilization, and hard disk utilization.
In a disclosed embodiment, the sample group assigns a usage weight to at least one of the processor usage, the graphics processor usage, the memory usage and the hard disk usage according to the task type and the computing requirement of the test task.
In the embodiment disclosed by the invention, the test result comprises the accuracy of executing the test task and the time for testing.
In a disclosed embodiment, the computing entity acquires the processing time of the computing node for at least one computing task; the computing entity takes at least one computing node whose processing time is less than or equal to a time threshold as a real-time node; and the computing entity distributes the computing task to the real-time node accordingly.
A second aspect of the embodiments of the present invention provides a distributed computing method. The method at least comprises: connecting with at least one computing node in a communication network; acquiring the current processing capacity of the computing node for at least one computing task; distributing the computing task to at least one computing node according to the processing capacity; and receiving the computing result of the computing task from the computing node.
A third aspect of the embodiments of the present invention provides a distributed computing method. The method at least comprises: connecting to at least one computing entity in a communication network; sending the current processing capacity for at least one computing task to the computing entity; receiving the computing task distributed by the computing entity according to the processing capacity; executing the computing task; and sending the computing result of the computing task to the computing entity.
A fourth aspect of the embodiments of the present invention provides a distributed computing apparatus. The apparatus comprises the computing entity and at least one of the computing nodes.
A fifth aspect of the embodiments of the present invention provides a distributed computing apparatus. The apparatus includes at least one of the computing entities.
A sixth aspect of the embodiments of the present invention provides a distributed computing apparatus. The apparatus comprises at least one of the computing nodes.
Compared with the prior art, the computing entity distributes computing tasks to at least one computing node, so that the at least one computing entity and the plurality of computing nodes cooperate on computing power. This reduces the computing-power cost and the computing load of the intelligent automobile, improves the accuracy of its data processing, and gives users a better experience when using the vehicle.
In view of the above, other features and advantages of the disclosed exemplary embodiments will become apparent from the following detailed description of the disclosed exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and those skilled in the art can also obtain other related drawings based on the drawings without inventive efforts.
FIG. 1 is a network topology diagram of a computing entity and a computing node to which the method of the present embodiment is applied;
FIG. 2 is a flowchart illustrating a method for a computing entity to obtain processing capabilities of a computing node according to the present embodiment;
FIG. 3 is a flowchart illustrating the distribution of computing tasks by computing entities in the method of the present embodiment;
FIG. 4 is a flowchart illustrating the configuration of the resource-capability evaluation model in the method according to the present embodiment;
FIG. 5 is a flowchart illustrating a method for a computing entity to obtain processing power and loss of a computing node according to the present embodiment;
FIG. 6 is a flowchart illustrating a computing entity distributing computing tasks according to processing power and loss in the method of the present embodiment;
fig. 7 is a structural diagram of an electronic device configured as a computing entity and a computing node according to the embodiment.
Detailed Description
The embodiment of the invention provides a distributed computing method. The method is applied to a communication network in which a computing entity and three computing nodes are deployed, and the computing entity selectively distributes tasks according to the processing capacity of each computing node.
FIG. 1 illustrates a communication network including a computing entity and three computing nodes. The communication network is formed in a vehicle, generally one driven by fuel or electricity, which provides at least a passenger compartment that the communication network can cover. In this application, a "computing entity" and/or a "computing node" is a device or system that retains data in registers or memory, has a processor to process the retained data, and is capable of electronic communication.
For example, each computing entity and/or computing node may access its own stored data and may communicate with one or more other entities and/or nodes. This communication is provided by local and/or cloud services, and the computing entity and any computing node may, for example, be on the same wired link or within the same wireless network.
As an example, the computing entity may be an on-board computer terminal connected at least with devices used by the vehicle for obtaining perception and/or interaction. The perception acquired by the equipment can be external perception relating to driving scenes such as vision, radar and the like, and can also be data perception relating to vehicle states such as speed, course angle and the like; the interaction acquired by the device can be data interaction related to information transfer, such as input video, audio and the like of a driver and passengers. The computing nodes may be mobile telephones or specialized devices deployed within the vehicle for providing processing capability, the specialized devices being at least accessible to the communication network and implementing the provision of processing capability in communication with the vehicle-mounted computer terminal over the communication network.
The computing entity shown in FIG. 1 is an on-board computer terminal, and the three computing nodes are a driver's first mobile phone, a passenger's second mobile phone and a dedicated device deployed in the vehicle. The on-board computer terminal provides an access point for wireless access to the communication network, and the first mobile phone, the second mobile phone and the dedicated device each join the communication network through that access point.
The on-board computer terminal obtains at least the vehicle's perception and/or interaction data. These data need to be processed by the on-board computer terminal into valid data that can be stored and that supports business functions; for example, raw video frames are processed to mark roads and vehicles in the video, and the driver's audio input is processed to recognize the driver's commands. To process the perception and/or interaction data, the on-board computer terminal allocates tasks over the communication network, and the allocated tasks are directed to one or more of the first mobile phone, the second mobile phone and the dedicated device.
When allocating tasks, the computing entity relies on the processing capacity expected of each computing node. The expected processing capacity of a computing node is obtained from at least one test task. FIG. 2 illustrates the steps by which the computing entity obtains the processing capacity of each computing node.
S101, the computing entity issues at least one test task to each computing node.
The test task refers to a purposeful processing task for data, and each computing node returns a test result to a computing entity according to the execution of the test task. The test tasks may be processing tasks for a single data type or processing tasks for a composite data type.
For example, extracting the lane line from image data, or dynamically capturing the vehicle ahead.
Generally, a test task is a processing task of a single data type; processing of a composite data type is preferably decomposed into a plurality of test tasks.
For example, video data is split into multi-frame image data and audio data; at least one test task extracts the lane line from the image data, and at least one test task acquires the audio information from the audio data.
S102, each computing node executes its test tasks and returns the test result of each test task to the computing entity.
The test result returned to the computing entity optionally includes the accuracy and the test time.
S103, the computing entity evaluates the processing capacity of each computing node on each test task according to the test result, and marks the processing capacity of each computing node on each test task.
The computing entity's evaluation of each computing node's processing capacity covers both the accuracy with which the node executed the test task and the time the test took.
For example, the test task may be to extract the lane line from video data, with the computing node returning its lane-line extraction result. The computing entity stores the reference result for the lane line in that video data in advance, and compares it with the node's extraction result to evaluate the accuracy and the processing time of the test task executed by the computing node.
Furthermore, according to the timeliness requirements for processing the vehicle's perception and/or interaction data, a computing node can additionally be marked, based on its processing capacity for each test task, as a real-time node.
For example, when the vehicle is in cruise mode, the current lane, following distance, speed, attitude and so on of the vehicle ahead need to be extracted from the video data within a limited time, to ensure dynamic capture of that vehicle and to support planning and control of the own vehicle's cruising.
In step S103, then, the computing entity marks, according to the test time of the test task executed by each computing node, which computing nodes can serve as real-time nodes for processing real-time tasks.
After acquiring each computing node's processing capacity for the different test tasks, the computing entity monitors the network state of each computing node in the communication network in real time. The computing entity repeats steps S101 to S103 whenever it detects that a computing node has reconnected to the communication network, or that its connection to the communication network is unstable. A minimal sketch of the capability records the computing entity might keep is given below.
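As an illustration only, the following Python sketch shows one way the outcome of steps S101 to S103 could be recorded. The class name CapabilityRecord, its fields and the REALTIME_THRESHOLD_S constant are assumptions made for the sketch; the patent does not prescribe any particular data structure or threshold value.

```python
from dataclasses import dataclass

REALTIME_THRESHOLD_S = 0.1  # assumed timeliness bound for marking real-time nodes

@dataclass
class CapabilityRecord:
    node_id: str
    task_type: str
    accuracy: float      # accuracy achieved on the test task
    test_time_s: float   # time taken to finish the test task
    realtime: bool = False

def mark_capabilities(test_results):
    """test_results: iterable of (node_id, task_type, accuracy, test_time_s) tuples."""
    records = []
    for node_id, task_type, accuracy, test_time_s in test_results:
        records.append(CapabilityRecord(
            node_id, task_type, accuracy, test_time_s,
            realtime=test_time_s <= REALTIME_THRESHOLD_S))  # step S103 marking
    return records
```

Keeping one record per node and per test task lets the entity later match a computing task against the nodes that were tested on the same task type.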
When the computing entity obtains a perception and/or interaction computing task of the vehicle, it first looks up each computing node's processing capacity for test tasks of the same task type and computing requirement as the computing task, and then distributes the computing task to suitable computing nodes according to that processing capacity. FIG. 3 illustrates the steps by which the computing entity assigns computing tasks.
S201, a computing entity acquires a computing task and analyzes the accuracy requirement and the timeliness requirement of the computing task.
S202, the computing entity traverses the computing nodes and matches at least one computing node whose processing capacity meets the accuracy requirement, the timeliness requirement and so on.
S203, if computing nodes with sufficient processing capacity are matched, the computing entity distributes the computing task to those computing nodes and waits for the returned computing results.
Preferably, if multiple computing nodes satisfy the required processing capacity, they are ranked and the first-ranked computing node is selected. The computing entity may rank the computing nodes by the accuracy achieved on the corresponding test task and/or by the time the test took.
For example, when extracting pedestrians and vehicles from video data, the test time of the extraction is used as the ranking criterion; when extracting roadside signs from video data, the extraction accuracy is used as the ranking criterion.
If no computing node with sufficient processing capacity is matched, the computing task is split into a plurality of branch tasks, and the computing entity performs steps S201 to S203 for each branch task, as in the sketch below.
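Purely as an illustrative sketch of steps S201 to S203 (not the patent's prescribed implementation), the matching, ranking and branch-splitting logic could look as follows. The Task fields, the ranking key and the split_into_branches helper are assumptions introduced for the example.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Task:
    task_type: str
    accuracy_required: float       # accuracy requirement analysed in S201
    time_required_s: float         # timeliness requirement analysed in S201
    rank_by_accuracy: bool = True  # whether ranking uses accuracy or test time
    branches: List["Task"] = field(default_factory=list)

def split_into_branches(task: Task) -> List[Task]:
    # Hypothetical splitter: a composite task (e.g. video + audio) would be
    # decomposed here into single-type branch tasks.
    return task.branches

def assign_task(task: Task, records) -> List[Tuple[str, Task]]:
    """records: CapabilityRecord-like objects (node_id, task_type, accuracy, test_time_s)."""
    candidates = [r for r in records
                  if r.task_type == task.task_type
                  and r.accuracy >= task.accuracy_required
                  and r.test_time_s <= task.time_required_s]
    if candidates:
        key = (lambda r: -r.accuracy) if task.rank_by_accuracy else (lambda r: r.test_time_s)
        best = min(candidates, key=key)          # first-ranked node
        return [(best.node_id, task)]
    assignments = []                             # no node qualifies: split and retry
    for branch in split_into_branches(task):
        assignments.extend(assign_task(branch, records))
    return assignments
```

If a branch task again matches no node, the recursion simply returns no assignment for it, which a real scheduler would have to handle, for example by keeping that branch on the on-board terminal.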
S204, the computing node returns the computing result of the computing task to the computing entity.
Therefore, in the embodiment, the computing entity can obtain the processing capability of each computing node for different test tasks based on the test tasks previously allocated to each computing node. The computing entity selects a computing node according to the processing capability when acquiring the computing task, and allocates the computing task to the selected computing node.
When distributing a computing task, the computing entity may also consider the loss each computing node incurs in performing it. Specifically, a computing node's result for a computing task is limited by the node's current resource usage. For example, when a mobile phone performs a computing task, its processor, graphics processor, memory and hard-disk usage may affect the accuracy and/or the computing time; a mobile phone may, for instance, have several service processes running in the background that already occupy part of the memory and of the graphics processor.
Therefore, when allocating a computing task, a computing entity needs to consider the influence of the current resource utilization of each computing node on the respective processing capability, i.e., the loss of the processing capability.
Preferably, the computing entity allocates tasks according to the processing capacity expected of each computing node at different resource usages. The expected processing capacity of each computing node at different resource usages is obtained directly from, or predicted from, at least one test task. FIG. 4 illustrates the steps by which the computing entity obtains the processing capacity of each computing node at different usages.
S301, the computing entity issues at least one test task to each computing node, wherein the test task is required to be executed under at least two different test degrees.
S302, each computing node executes each test task and returns the test result of each test task under at least two different test degrees.
S303, the computing entity evaluates, according to each test result, each computing node's processing capacity for each test task at each test usage, and marks each computing node's processing capacity for each test task at each resource usage.
S304, the computing entity creates a resource-capability evaluation model according to the processing capability of the testing task under a plurality of testing degrees.
The resource-capability evaluation model is used to predict how a computing node's processing capability changes with the usage level when it executes a test task.
There are various ways to build the resource-capability evaluation model.
For example, the test usage may be quantified as a whole on a first scale, such as a percentage, and the processing capability quantified on a second scale, also a percentage, so that different test-usage percentages are associated with different processing-capability percentages. Then, by polynomial fitting, a polynomial resource-capability evaluation model can be created from the finite number of test-usage/processing-capability sample groups (first sample groups) acquired in steps S101 to S103. A sketch of this polynomial variant follows.
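As a minimal sketch of the polynomial variant (the sample values and the polynomial degree are invented for illustration and are not taken from the patent):

```python
import numpy as np

# Overall test usage (percent) and the processing capability observed at that usage,
# e.g. the accuracy achieved on a test task; the values are illustrative only.
usage = np.array([10.0, 30.0, 50.0, 70.0, 90.0])
capability = np.array([0.98, 0.95, 0.88, 0.72, 0.45])

coeffs = np.polyfit(usage, capability, deg=2)  # fit a second-order polynomial
model = np.poly1d(coeffs)

predicted = model(60.0)  # predicted capability when the current resource usage is 60%
```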
Alternatively, the test usage may be broken down into several resource items, such as processor usage, graphics processor usage, memory usage and hard disk usage, and assembled into an array; the array of items together with the measured processing capability forms a second sample group. A resource-capability evaluation model based on a neural-network prediction model can then be created from these second sample groups.
As an example, the resource-capability evaluation model is a neural network model. FIG. 5 shows the steps of a computational entity configuring a neural network model.
S401, when a computing entity is initially connected with a computing node in a communication network, the computing entity initializes a resource-capability evaluation model.
S402, the computing entity sends at least one test task to the computing node.
The test tasks are representative: different test tasks correspond to different task types and computing requirements that need to be handled separately.
S403, the computing node executes each test task under at least two different test degrees and returns the test result of the test task at each test degree.
The test result comprises the accuracy of task processing and the time for testing. Each computing node, such as a mobile phone, may have different processing capabilities when oriented to different test tasks due to different software and hardware configurations.
S404, the computing entity stores the test result of the computing node to each test task under each test degree.
S405, the computing entity obtains the test results of at least two test degrees of the computing node under a test task, and creates a second sample group according to the test results and the associated test degrees.
S406, the computing entity obtains a plurality of second sample groups of the test task, and trains an initialized resource-capability evaluation model according to the second sample groups.
In one implementation of step S401, the resource-capability evaluation model is a BP neural network model.
As an example, the BP neural network model includes an input layer, a hidden layer and an output layer. The number of input-layer nodes equals the number of resource items into which the resource usage can be decomposed. The number of output-layer nodes is two, corresponding to the accuracy and the processing time that express the processing capability. The number of hidden-layer nodes l can be taken as l = √(n + m) + a, where n is the number of input-layer nodes, l is the number of hidden-layer nodes, m is the number of output-layer nodes, and a is a constant between 0 and 10, generally taken as 3. The activation function is preferably the Sigmoid function 1/(1 + e^(-xj)), where xj is the weighted sum of the node's inputs plus a constant term.
Optionally, among the resource items corresponding to the input layer, the inputs of one or more input-layer nodes are weighted according to the task type and the computing requirement of the test task.
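The numpy sketch below illustrates the network sizing and a forward pass consistent with the description above. The weights are random placeholders standing in for parameters that would be trained from the second sample groups, and the variable names are assumptions made for the example.

```python
import numpy as np

n_in = 4       # resource items: processor, graphics processor, memory, hard disk usage
n_out = 2      # outputs: accuracy and processing time
a = 3          # constant in the empirical hidden-layer formula
n_hidden = int(round(np.sqrt(n_in + n_out))) + a   # l = sqrt(n + m) + a

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(n_hidden, n_in)), np.zeros(n_hidden)   # input -> hidden
W2, b2 = rng.normal(size=(n_out, n_hidden)), np.zeros(n_out)     # hidden -> output

def predict(resource_usage):
    """resource_usage: array of 4 usage fractions in [0, 1], optionally pre-weighted."""
    h = sigmoid(W1 @ resource_usage + b1)
    return sigmoid(W2 @ h + b2)   # [predicted accuracy, normalized processing time]

print(predict(np.array([0.6, 0.4, 0.7, 0.3])))
```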
Once the computing entity has established the resource-capability evaluation model and acquires a computing task on the vehicle's perception and/or interaction data, it judges each computing node's loss of processing capability from that node's current resource usage, and distributes the computing task to the computing nodes according to their remaining processing capability. FIG. 6 illustrates the steps by which the computing entity assigns computing tasks.
S501, the computing entity issues to each computing node at least one test task that is to be executed under at least two different test degrees.
S502, the computing entity sends processing requests to each computing node respectively.
S503, each computing node returns the resource utilization degree and the waiting information of the computing node to the computing entity according to the processing request.
Optionally, on receiving the processing request the computing node may limit its own resource usage, reserving at least part of its resources for the computing task it may be assigned; for example, part of the processor and memory is reserved for caching the task's data and for carrying out the required processing on it. In step S502 the processing request sent by the computing entity carries the task type, the computing requirement, the data amount and so on of the computing task, and in step S503 the computing node reserves each resource item accordingly.
Preferably, the computing node may be configured in step S502 to decide whether to respond to the processing request. The computing node checks whether each resource item is greater than or equal to its respective usage threshold, and returns wait information to the computing entity only when every resource item is greater than or equal to its threshold. Returning the wait information means that at least the processing capacity for the most basic computing task is still covered.
Furthermore, the computing node dynamically configures the usage thresholds according to the task type and the computing requirement of the computing task, as in the sketch below.
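A node-side sketch of this response logic (step S503) follows; the resource item names, the threshold values and the reply format are assumptions for illustration, not part of the patent.

```python
def respond_to_processing_request(current_usage, thresholds):
    """current_usage, thresholds: dicts keyed by resource item, values in [0, 1]."""
    # Wait information is returned only when every resource item has reached its threshold.
    saturated = all(current_usage[item] >= thresholds[item] for item in thresholds)
    return {
        "resource_usage": current_usage,
        "wait_info": {"must_wait": True} if saturated else None,
    }

reply = respond_to_processing_request(
    {"cpu": 0.92, "gpu": 0.88, "memory": 0.95, "disk": 0.70},
    {"cpu": 0.90, "gpu": 0.85, "memory": 0.90, "disk": 0.60},
)
print(reply["wait_info"])  # {'must_wait': True} since every item is at or above its threshold
```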
S504, the computing entity receives the resource utilization and the waiting information of each computing node, and judges the processing capacity loss of each computing node under the current resource utilization according to the resource utilization and the resource-capacity evaluation model.
S505, the computing entity judges whether each computing node's remaining processing capacity can meet the accuracy and timeliness requirements of the computing task.
S506, the computing entity distributes computing tasks to the computing nodes meeting the requirements.
S507, the computing node returns the execution result of the computing task to the computing entity.
Based on this, the computing entity and computing nodes implemented by the method of this embodiment carry out distributed computing of at least one computing task in the communication network. The method is particularly applicable to a computer terminal and a plurality of mobile phones in a vehicle passenger compartment, where the processing capacity provided by the mobile phones is used for directed task processing of the vehicle's sound, video, radar data and the like. Optionally, at least one dedicated device that only participates in the distributed computing may be deployed in the passenger compartment; it is used when the number of mobile phones is limited or the number of computing tasks is high.
In step S506, if no computing node satisfies the requirements, the computing entity may split the computing task into at least two branch tasks and perform steps S501 to S507 separately for each branch task. A sketch of this eligibility check and fallback is given below.
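To make steps S504 to S506 concrete, the sketch below predicts each node's remaining capability from its reported resource usage and keeps the nodes that still satisfy the task's requirements. The stand-in predict function, the requirement fields and the reply format (matching the node-side sketch above) are assumptions for illustration.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class TaskRequirements:
    accuracy_required: float
    time_budget_norm: float   # normalized processing-time budget in [0, 1]

def predict(resource_usage):
    # Stand-in for the trained resource-capability evaluation model:
    # higher usage means lower accuracy and a longer normalized processing time.
    load = float(resource_usage.mean())
    return 1.0 - 0.5 * load, 0.2 + 0.8 * load

def eligible_nodes(task, node_replies):
    """node_replies: node_id -> reply dict as produced by respond_to_processing_request."""
    chosen = []
    for node_id, reply in node_replies.items():
        if reply["wait_info"] is not None:
            continue   # node asked the entity to wait, so it is skipped here (S504)
        usage = np.array([reply["resource_usage"][k] for k in ("cpu", "gpu", "memory", "disk")])
        accuracy, time_norm = predict(usage)                      # remaining capability
        if accuracy >= task.accuracy_required and time_norm <= task.time_budget_norm:
            chosen.append(node_id)                                # requirements met (S505)
    return chosen
```

If eligible_nodes returns an empty list, the entity falls back to splitting the computing task into branch tasks and repeating S501 to S507 for each branch, as described above.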
The embodiment of the invention provides a distributed computing method. The method is applied to a computing entity in a communication network. The computing entity can communicate with a number of computing nodes deployed in the communication network and is at least connected with a device for obtaining perception and/or interaction. These perception and/or interaction data need to be processed by the computing entity into valid data that can be stored and that supports business functions, and to process the data the computing entity distributes computing tasks to the computing nodes according to their processing capacity.
For example, the computing entity may be a vehicle-mounted computer terminal, such as the instrument cluster or the in-vehicle head unit.
The embodiment of the invention provides a distributed computing method. The method is applied to a computing node in a communication network in which a computing entity has been deployed. The computing node can actively or passively send its processing capacity for computing tasks to the computing entity, and accept and execute computing tasks from the computing entity.
For example, the computing node may be a mobile telephone, or a dedicated vehicle device that has data storage and processing capability and can access the communication network, such as a laptop computer, a headrest-mounted on-board computer or an entertainment system.
The embodiment of the invention provides a distributed computing method. The method is applied to a plurality of computing entities in a communication network. Each computing entity can communicate with all the other computing entities deployed in the communication network, and each may be connected to some of the devices for obtaining perception and/or interaction, with different devices associated with different computing entities. Any computing entity may distribute computing tasks to the other computing entities according to their processing capacity, and each computing entity informs the others of its own processing capacity for the various types of computing task.
For example, the plurality of computing entities may be the instrument cluster, the in-vehicle head unit and similar units deployed in the vehicle.
The embodiment of the invention provides a distributed computing device. The apparatus includes at least one computing entity and at least one computing node. The computing entity is used for acquiring the computing tasks and distributing the computing tasks to the computing nodes according to the processing capacity, and the computing nodes are used for implementing each computing task and returning the execution result of the computing tasks.
The embodiment of the invention provides a distributed computing device. The apparatus includes at least one computing entity. The computing entity is used for acquiring the computing tasks and distributing the computing tasks to the computing nodes according to the processing capacity, and the computing nodes are used for implementing each computing task and returning the execution result of the computing tasks.
The embodiment of the invention provides a distributed computing device. The apparatus includes at least one computing node. A computing entity has been deployed in the communication network, and the computing node can actively or passively send its processing capacity for computing tasks to the computing entity, and accept and execute computing tasks from the computing entity.
Fig. 7 is a structural diagram of an electronic device according to an embodiment of the present invention. The electronic device is configured to serve as a computing entity or a computing node. The electronic device includes a memory, a processor and a system bus, the memory containing a stored executable program. Those skilled in the art will understand that the electronic device structure shown in the figure does not limit the electronic device, which may include more or fewer components than shown, combine some components, or arrange the components differently.
It should be noted that, in the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to relevant descriptions of other embodiments for parts that are not described in detail in a certain embodiment.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A distributed computing method, characterized in that,
the method at least comprises the steps of:
the computing entity is connected with at least one computing node in a communication network;
the computing entity obtains the current processing capacity of the computing node on at least one computing task;
the computing entity distributes all or part of the computing task to at least one computing node according to the processing capacity;
the computing node executes the computing task and sends a computing result of the computing task to the computing entity.
2. The distributed computing method of claim 1,
the allocation of the computing task by the computing entity is configured such that:
the computing entity sends at least one processing request to at least one computing node;
the computing node sends the current resource utilization degree to the computing entity according to the processing request;
the computing entity obtains the current processing capacity of the computing node according to the resource utilization degree;
the computing entity distributes the computing task to at least one computing node according to the current processing capacity.
3. The distributed computing method of claim 2,
the computing node sends the waiting information to the computing entity when judging that the resource utilization is greater than or equal to a utilization threshold;
the computing entity allocates all or part of the computing task to at least one of the computing nodes based on the current processing capacity and the waiting information.
4. The distributed computing method of claim 3,
and the computing node configures the utilization threshold according to the task type and the computing requirement of the computing task.
5. The distributed computing method of claim 2,
the computing entity obtains the loss of the processing capacity of the computing node to the computing task according to the resource utilization;
the computing entity allocates all or part of the computing task according to the waiting information and the current processing capacity loss.
6. The distributed computing method of claim 5,
the computing entity obtains a resource-capability evaluation model of the computing node facing the computing task;
and the computing entity predicts the loss of the resource utilization to the processing capacity of the computing task according to the resource-capacity evaluation model.
7. The distributed computing method of claim 1,
the computing entity obtains the processing time of the computing node on at least one computing task;
the computing entity acquires at least one computing node with processing time less than or equal to a time threshold value as a real-time node;
and the computing entity distributes the computing task to the real-time node according to a processing result.
8. A distributed computing method, characterized in that,
the method at least comprises the steps of:
connecting with at least one computing node in a communication network;
acquiring the current processing capacity of the computing node on at least one computing task;
distributing the computing task to at least one computing node according to the processing capacity;
and receiving a calculation result of the calculation node on the calculation task.
9. A distributed computing method, characterized in that,
the method at least comprises the steps of:
connecting to at least one computing entity in a communication network;
sending the current processing capacity of at least one computing task to the computing entity;
receiving the computing tasks distributed by the computing entities according to the processing capacity;
executing the computing task;
and sending the calculation result of the calculation task to the calculation entity.
10. A distributed computing apparatus comprising the computing entity and/or at least one of the computing nodes of any one of claims 1 to 9.
CN202211050556.2A 2022-08-29 2022-08-29 Distributed computing method and device Pending CN115114034A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211050556.2A CN115114034A (en) 2022-08-29 2022-08-29 Distributed computing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211050556.2A CN115114034A (en) 2022-08-29 2022-08-29 Distributed computing method and device

Publications (1)

Publication Number Publication Date
CN115114034A 2022-09-27

Family

ID=83335994

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211050556.2A Pending CN115114034A (en) 2022-08-29 2022-08-29 Distributed computing method and device

Country Status (1)

Country Link
CN (1) CN115114034A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106095586A (en) * 2016-06-23 2016-11-09 东软集团股份有限公司 A kind of method for allocating tasks, Apparatus and system
CN109597685A (en) * 2018-09-30 2019-04-09 阿里巴巴集团控股有限公司 Method for allocating tasks, device and server
CN110866167A (en) * 2019-11-14 2020-03-06 北京知道创宇信息技术股份有限公司 Task allocation method, device, server and storage medium
CN111124687A (en) * 2019-12-30 2020-05-08 浪潮电子信息产业股份有限公司 CPU resource reservation method, device and related equipment
CN111399970A (en) * 2019-01-02 2020-07-10 ***通信有限公司研究院 Reserved resource management method, device and storage medium
CN111949394A (en) * 2020-07-16 2020-11-17 广州玖的数码科技有限公司 Method, system and storage medium for sharing computing power resource
CN112527490A (en) * 2019-09-17 2021-03-19 广州虎牙科技有限公司 Node resource control method and device, electronic equipment and storage medium
CN113656158A (en) * 2021-08-12 2021-11-16 深圳市商汤科技有限公司 Task allocation method, device, equipment and storage medium
CN114253728A (en) * 2021-12-23 2022-03-29 上海交通大学 Heterogeneous multi-node cooperative distributed neural network deployment system based on webpage ecology


Similar Documents

Publication Publication Date Title
CN110046953B (en) Rental method and device for shared automobile
CN109343946B (en) Software-defined Internet of vehicles computing task migration and scheduling method
CN108777852A (en) A kind of car networking content edge discharging method, mobile resources distribution system
EP3457664A1 (en) Method and system for finding a next edge cloud for a mobile user
CN111083634B (en) CDN and MEC-based vehicle networking mobility management method
US11861407B2 (en) Method for managing computing capacities in a network with mobile participants
CN107203824B (en) Car pooling order distribution method and device
CN106327311B (en) Order processing method, device and system
CN110633815A (en) Car pooling method and device, electronic equipment and storage medium
CN111860853B (en) Online prediction system, device, method and electronic device
JP2022050380A (en) Method for monitoring vehicle, apparatus, electronic device, storage medium, computer program, and cloud control platform
CN107798420B (en) Information display method and device and electronic equipment
CN109981473A (en) A kind of real-time messages bus system
CN105119914A (en) A resource processing method and apparatus, a terminal and a server
Ali et al. Priority-based cloud computing architecture for multimedia-enabled heterogeneous vehicular users
CN112887401B (en) Network access method based on multiple operating systems and vehicle machine system
CN115114034A (en) Distributed computing method and device
CN116915869A (en) Cloud edge cooperation-based time delay sensitive intelligent service quick response method
CN111294762A (en) Vehicle business processing method based on radio access network RAN slice cooperation
CN108770014B (en) Calculation evaluation method, system and device of network server and readable storage medium
CN111480349B (en) Control device and method for determining data format
CN108513357B (en) Method and equipment for avoiding resource collision in motorcade
CN114301907B (en) Service processing method, system and device in cloud computing network and electronic equipment
CN115589396A (en) Service management method, system, device, electronic equipment and storage medium
CN114138466A (en) Task cooperative processing method and device for intelligent highway and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination