CN112988346A - Task processing method, device, equipment and storage medium

Info

Publication number
CN112988346A
Authority
CN
China
Prior art keywords
task
processed
drifting
drift
edge computing
Legal status
Granted
Application number
CN202110174976.0A
Other languages
Chinese (zh)
Other versions
CN112988346B (en)
Inventor
李朝霞
康楠
邢鑫
成景山
李铭轩
李策
陈海波
时文丰
Current Assignee
China United Network Communications Group Co Ltd
Unicom Cloud Data Co Ltd
Original Assignee
China United Network Communications Group Co Ltd
Unicom Cloud Data Co Ltd
Priority date
Filing date
Publication date
Application filed by China United Network Communications Group Co Ltd and Unicom Cloud Data Co Ltd
Priority to CN202110174976.0A
Publication of CN112988346A
Application granted
Publication of CN112988346B
Status: Active
Anticipated expiration

Classifications

    • G06F: Electric digital data processing (Section G, Physics; Class G06, Computing, calculating or counting)
    • G06F 9/5072: Grid computing (via G06F 9/50 Allocation of resources; G06F 9/5061 Partitioning or combining of resources)
    • G06F 9/4806: Task transfer initiation or dispatching (via G06F 9/48 Program initiating; program switching)
    • G06F 9/4843: Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management (Y02D, climate change mitigation technologies in ICT)

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Information Transfer Between Computers (AREA)
  • Power Sources (AREA)

Abstract

The application provides a task processing method, apparatus, device, and storage medium. The method includes: receiving a task drifting request sent by an edge computing server, where the task drifting request carries a task to be processed; determining a target computing resource device corresponding to the task to be processed; establishing a preset channel between the edge computing server and the target computing resource device; and controlling the task to be processed to perform task drifting through the preset channel, so that the target computing resource device completes processing of the task. In other words, the embodiment of the application can allocate computing resources for a task to be processed at the edge computing server, establish a channel between the edge computing server and those computing resources, and process the task with those resources over the channel. This solves the problem of poor processing quality of the edge computing server for delay-sensitive applications and enables the edge computing server to respond quickly to burst computation requests.

Description

Task processing method, device, equipment and storage medium
Technical Field
The present application relates to the field of Internet of Things technology, and in particular, to a task processing method, a task processing apparatus, a task processing device, and a storage medium.
Background
With the continuous development of related technologies, more and more Internet of Things applications and edge applications have emerged. Many existing Internet of Things applications, such as virtual reality and autonomous driving, are delay-sensitive.
In the related art, the computing power of the edge computing server is very limited, and it cannot respond quickly to all the burst computing requests. Thus, some delay-sensitive computing tasks may experience long queuing delays at the edge servers, which may even exceed the network delay from the user network to the remote cloud computing center, resulting in an extremely poor user experience for delay-sensitive applications.
At present, there is no effective solution to the above problems. Therefore, how to improve the processing quality of the edge computing server for delay-sensitive applications and respond quickly to burst computation requests has become an urgent problem to be solved.
Disclosure of Invention
In order to solve the problems in the prior art, the present application provides a task processing method, a task processing apparatus, a task processing device, and a storage medium.
In a first aspect, an embodiment of the present application provides a task processing method, including the following steps:
receiving a task drifting request sent by an edge computing server, wherein the task drifting request carries a task to be processed;
determining target computing resource equipment corresponding to the task to be processed;
establishing a preset channel between the edge computing server and the target computing resource equipment;
and controlling the task to be processed to perform task drifting through the preset channel so as to enable the target computing resource equipment to complete processing of the task to be processed.
In one possible implementation manner, after the preset channel is established between the edge computing server and the target computing resource device, the method further includes:
setting the priority of the preset channel according to the task to be processed;
through the preset channel, the task to be processed is controlled to perform task drifting, and the method comprises the following steps:
and performing task drifting processing on the task to be processed through the preset channel based on the priority.
In a possible implementation manner, the controlling, by the preset channel, the task to be processed to perform task drifting includes:
and sending a drift starting instruction to the edge computing server so that the edge computing server sends the task to be processed to the target computing resource equipment through the preset channel according to the drift starting instruction.
In a possible implementation manner, after the sending the drift starting instruction to the edge computing server, the method further includes:
receiving the task drift progress and the task drift log reported by the edge computing server, and receiving the equipment state reported by the target computing resource equipment;
and if the task drift needs to be stopped according to the task drift progress, the task drift log and/or the equipment state, sending a drift stop instruction to the edge computing server so that the edge computing server stops sending the to-be-processed task through the preset channel according to the drift stop instruction.
In a possible implementation manner, after the controlling the task to be processed to perform task drifting through the preset channel, the method further includes:
acquiring a processing result of the target computing resource equipment on the task to be processed;
and sending the processing result to the edge computing server and/or other preset modules.
In a possible implementation manner, after the obtaining of the processing result of the target computing resource device on the task to be processed, the method further includes:
and disconnecting the preset channel.
In a second aspect, an embodiment of the present application provides a task processing device, including:
a receiving module, used for receiving a task drifting request sent by an edge computing server, wherein the task drifting request carries a task to be processed;
the determining module is used for determining target computing resource equipment corresponding to the task to be processed;
the establishing module is used for establishing a preset channel between the edge computing server and the target computing resource equipment;
and the drifting module is used for controlling the task to be processed to perform task drifting through the preset channel so as to enable the target computing resource equipment to complete processing of the task to be processed.
In a possible implementation manner, the drift module is specifically configured to:
setting the priority of the preset channel according to the task to be processed;
and performing task drifting processing on the task to be processed through the preset channel based on the priority.
In a possible implementation manner, the drift module is specifically configured to:
and sending a drift starting instruction to the edge computing server so that the edge computing server sends the task to be processed to the target computing resource equipment through the preset channel according to the drift starting instruction.
In a possible implementation manner, the drift module is further configured to:
receiving the task drift progress and the task drift log reported by the edge computing server, and receiving the equipment state reported by the target computing resource equipment;
and if the task drift needs to be stopped according to the task drift progress, the task drift log and/or the equipment state, sending a drift stop instruction to the edge computing server so that the edge computing server stops sending the to-be-processed task through the preset channel according to the drift stop instruction.
In a possible implementation manner, the apparatus further includes a post-processing module configured to, after the drifting module controls the task to be processed to perform task drifting through the preset channel:
acquiring a processing result of the target computing resource equipment on the task to be processed;
and sending the processing result to the edge computing server and/or other preset modules.
In one possible implementation manner, the post-processing module is further configured to:
and disconnecting the preset channel.
In a third aspect, an embodiment of the present application provides a task processing device, including:
a processor;
a memory; and
a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor, the computer program comprising instructions for performing the method of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where a computer program is stored, and the computer program causes a server to execute the method according to the first aspect.
In a fifth aspect, the present application provides a computer program product, which includes computer instructions for executing the method of the first aspect by a processor.
The application provides a task processing method, apparatus, device, and storage medium. The method includes: receiving a task drifting request sent by an edge computing server, where the task drifting request carries a task to be processed; determining a target computing resource device corresponding to the task to be processed; establishing a preset channel between the edge computing server and the target computing resource device; and controlling the task to be processed to perform task drifting through the preset channel, so that the target computing resource device completes processing of the task. In this way, the method can allocate computing resources for a task to be processed in the edge computing server, establish a channel between the edge computing server and those computing resources, and process the task with those resources over the channel. This solves the problem of poor processing quality of the edge computing server for delay-sensitive applications, enables the edge computing server to respond quickly to burst computation requests, and thus meets practical application needs.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of a task processing system architecture according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a task processing method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of another task processing method provided in an embodiment of the present application;
fig. 4 is a schematic structural diagram of a task processing device according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of another task processing device according to an embodiment of the present application;
FIG. 6A provides one possible basic hardware architecture for a task processing device as described herein;
fig. 6B provides another possible basic hardware architecture for a task processing device as described herein.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," "third," and "fourth," if any, in the description and claims of this application and the above-described figures are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In the related art, the computing power of the edge computing server is very limited, and it cannot respond quickly to all the burst computing requests. Thus, some delay-sensitive computing tasks may experience long queuing delays at the edge servers, which may even exceed the network delay from the user network to the remote cloud computing center, resulting in an extremely poor user experience for delay-sensitive applications.
However, there is no effective solution to the above problems. Therefore, how to improve the processing quality of the edge computing server for delay-sensitive applications and respond quickly to burst computation requests has become an urgent problem to be solved.
In order to solve the above problem, an embodiment of the present application provides a task processing method that can allocate a computing resource for a to-be-processed task in an edge computing server, establish a channel between the edge computing server and the computing resource, and process the to-be-processed task with that computing resource over the channel. This solves the problem of poor processing quality of the edge computing server for delay-sensitive applications, enables the edge computing server to respond quickly to burst computation requests, and meets practical application needs.
Optionally, a task processing method provided in the present application may be applied to the schematic architecture of the task processing system shown in fig. 1, and as shown in fig. 1, the system may include at least one of a receiving device 101, a drifting device 102, and a display device 103.
In a specific implementation process, the receiving device 101 may be an input/output interface or a communication interface, and may be configured to receive a task drifting request sent by an edge computing server, where the task drifting request carries information such as a task to be processed.
The drifting device 102 may allocate computing resources for the to-be-processed task in the edge computing server, establish a channel between the edge computing server and the computing resources, and process the to-be-processed task with those resources over the channel, thereby solving the problem of poor processing quality of the edge computing server for delay-sensitive applications and enabling the edge computing server to respond quickly to burst computation requests, which meets practical application needs.
The display device 103 may be used to display the above-described tasks to be processed, computing resources, and the like.
The display device may also be a touch display screen for receiving user instructions while displaying the above-mentioned content to enable interaction with a user.
It should be understood that the above devices may be implemented by a processor reading and executing instructions in a memory, or by dedicated chip circuits.
The above system is only exemplary; in a specific implementation, it can be configured according to application requirements.
It is to be understood that the illustrated architecture of the embodiments of the present application does not constitute a specific limitation on the architecture of the task processing system. In other possible embodiments of the present application, the foregoing architecture may include more or fewer components than those shown in the drawings, or combine some components, or split some components, or arrange the components differently, which may be determined according to practical application scenarios and is not limited herein. The components shown in fig. 1 may be implemented in hardware, software, or a combination of software and hardware.
In addition, the system architecture and the service scenarios described in the embodiments of the present application are intended to illustrate the technical solutions of the embodiments more clearly and do not constitute a limitation on them. Those skilled in the art will appreciate that, as the system architecture evolves and new service scenarios emerge, the technical solutions provided in the embodiments of the present application remain applicable to similar technical problems.
The technical solutions of the present application are described below with several embodiments as examples, and the same or similar concepts or processes may not be described in detail in some embodiments.
Fig. 2 provides a flowchart of a task processing method, which may be performed by any device that implements the method; the device may be implemented by software and/or hardware. As shown in fig. 2, on the basis of the system architecture shown in fig. 1, the task processing method provided by the embodiment of the present application may include the following steps:
s201: and receiving a task drifting request sent by the edge computing server, wherein the task drifting request carries a task to be processed.
Here, the task to be processed may be a delay-sensitive task, a burst computation request task, or the like, which may be determined according to the actual situation; this is not particularly limited in this embodiment of the application.
Taking the system shown in fig. 1 as an example, the receiving device may receive the task drifting request sent by the edge computing server and store it in a preset queue. The receiving device stores task drifting requests in the preset queue in the order in which they are received, so that the drifting device can subsequently check whether data exists in the preset queue and, if so, perform the subsequent task drifting operations, which suits practical applications.
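As an illustration of the queue-based hand-off between the receiving device and the drifting device described above, the following Python sketch is a minimal, hypothetical implementation; the class names, the TaskDriftRequest fields, and the use of a FIFO queue are assumptions, since the embodiment does not prescribe a data structure.

```python
# Minimal sketch of the receiving device described above (assumed names and fields).
import queue
from dataclasses import dataclass
from typing import Optional

@dataclass
class TaskDriftRequest:
    task_id: str           # identifier of the task to be processed
    payload: bytes         # serialized task carried by the drift request
    delay_sensitive: bool  # whether the task is delay-sensitive

class ReceivingDevice:
    """Receives task drifting requests from the edge computing server and
    stores them in a preset queue in the order they arrive."""

    def __init__(self) -> None:
        self._preset_queue: "queue.Queue[TaskDriftRequest]" = queue.Queue()

    def on_drift_request(self, request: TaskDriftRequest) -> None:
        self._preset_queue.put(request)  # FIFO preserves the receiving order

    def next_request(self, timeout: float = 1.0) -> Optional[TaskDriftRequest]:
        """Called by the drifting device; returns None when the queue is empty."""
        try:
            return self._preset_queue.get(timeout=timeout)
        except queue.Empty:
            return None
```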
S202: and determining target computing resource equipment corresponding to the task to be processed.
In this embodiment of the present application, the drifting device obtains from the receiving device the task drifting request sent by the edge computing server, and may determine the target computing resource device according to the to-be-processed task carried in the request. For example, the drifting device may pre-store a correspondence between tasks and computing resource devices, and then determine the target computing resource device corresponding to the task to be processed according to that correspondence.
The above correspondence may be determined by the drifting device from a large number of known tasks and the computing resources used to process them.
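The correspondence lookup could be as simple as the following sketch; the task-type keys, device names, and fallback device are hypothetical examples, not values taken from the embodiment.

```python
# Hypothetical task-to-resource correspondence pre-stored by the drifting device.
CORRESPONDENCE = {
    "video_inference": "gpu-node-01",
    "sensor_aggregation": "cpu-node-03",
    "vr_rendering": "gpu-node-02",
}

def determine_target_device(task_type: str, default: str = "cloud-fallback") -> str:
    """Return the target computing resource device for a task type,
    falling back to a default device when no mapping is stored."""
    return CORRESPONDENCE.get(task_type, default)
```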
S203: and establishing a preset channel between the edge computing server and the target computing resource equipment.
Here, the drifting device may establish a software-defined networking (SDN) channel between the edge computing server and the target computing resource device, with a bandwidth determined according to the actual situation, so that the subsequent drifting operation transmits data over the SDN channel at high speed and with low delay.
For example, after the drift device establishes a preset channel between the edge computing server and the target computing resource device, the drift device may further set a priority of the preset channel according to the task to be processed, so that the task to be processed is subsequently subjected to task drift processing through the preset channel based on the priority.
Channels corresponding to different tasks have different priorities; for example, a delay-sensitive task or a burst computation request task corresponds to a high-priority channel.
In the embodiment of the application, when the task to be processed is a delay-sensitive task or a burst computation request task, the priority of the preset channel is set to high, so that the task is preferentially drifted in the subsequent task drift processing. This solves the problem of poor processing quality of the edge computing server for delay-sensitive applications and enables the edge computing server to respond quickly to burst computation requests.
In addition, the drifting device may pre-store a correspondence between tasks and channel priorities, determine from it the channel priority corresponding to the task to be processed, set the priority of the preset channel accordingly, and then perform task drift processing on the task through the preset channel based on that priority.
In this correspondence, a delay-sensitive task or a burst computation request task maps to a high channel priority.
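A minimal sketch of establishing the channel and applying the priority rule above follows; the Channel dataclass, the two priority levels, and the default bandwidth are assumptions, and the sketch does not model an actual SDN controller.

```python
# Sketch of channel establishment with the priority rule described above.
from dataclasses import dataclass
from enum import IntEnum

class Priority(IntEnum):
    NORMAL = 0
    HIGH = 1

@dataclass
class Channel:
    src: str              # edge computing server
    dst: str              # target computing resource device
    bandwidth_mbps: int   # determined according to the actual situation
    priority: Priority = Priority.NORMAL

def establish_channel(edge_server: str, target_device: str,
                      delay_sensitive: bool, burst_request: bool,
                      bandwidth_mbps: int = 1000) -> Channel:
    """Create the preset (e.g. SDN-style) channel and set its priority:
    delay-sensitive or burst computation tasks get the high priority."""
    prio = Priority.HIGH if (delay_sensitive or burst_request) else Priority.NORMAL
    return Channel(edge_server, target_device, bandwidth_mbps, prio)
```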
S204: and controlling the task to be processed to perform task drifting through the preset channel so that the target computing resource equipment completes processing of the task to be processed.
For example, after controlling the task to be processed to perform task drifting through the preset channel, the drifting device may obtain the processing result of the target computing resource device for the task, and then send the processing result to the edge computing server and/or other preset modules, so that they learn the result in time and can perform subsequent processing, thereby meeting application requirements.
In addition, after the processing result of the target computing resource device for the task to be processed is obtained, the drifting device may disconnect the preset channel, so that the computing network can resume its ordinary service.
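The post-processing step can be sketched as below; the callables for fetching the result, notifying receivers, and releasing the channel are placeholders for whatever transport the deployment actually uses.

```python
# Sketch of result collection, forwarding, and channel teardown (assumed callables).
from typing import Any, Callable, Iterable

def finish_drift(fetch_result: Callable[[], Any],
                 receivers: Iterable[Callable[[Any], None]],
                 release_channel: Callable[[], None]) -> Any:
    result = fetch_result()        # processing result from the target computing resource device
    for receiver in receivers:     # e.g. the edge computing server and/or other preset modules
        receiver(result)
    release_channel()              # disconnect the preset channel so the computing
    return result                  # network can resume its ordinary service
```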
In the embodiment of the application, a task drifting request sent by an edge computing server is received, where the request carries a task to be processed; a target computing resource device corresponding to the task is determined; a preset channel is established between the edge computing server and the target computing resource device; and the task is controlled to drift through the preset channel so that the target computing resource device completes its processing. That is, the embodiment of the application can allocate computing resources for the task to be processed in the edge computing server, establish a channel between the edge computing server and those computing resources, and process the task with those resources over the channel. This solves the problem of poor processing quality of the edge computing server for delay-sensitive applications, enables the edge computing server to respond quickly to burst computation requests, and meets practical application needs.
In addition, in the embodiment of the present application, when the to-be-processed task is controlled to perform task drifting through the preset channel, a drift start instruction is sent to the edge computing server. After the drift start instruction is sent, the task drift progress and the task drift log reported by the edge computing server and the device state reported by the target computing resource device are received, and it is determined whether the task drift needs to be stopped. Fig. 3 is a flowchart illustrating another task processing method according to an embodiment of the present application. As shown in fig. 3, the method includes:
s301: and receiving a task drifting request sent by the edge computing server, wherein the task drifting request carries a task to be processed.
S302: and determining target computing resource equipment corresponding to the task to be processed.
S303: and establishing a preset channel between the edge computing server and the target computing resource equipment.
In steps S301 to S303, reference is made to the related description of steps S201 to S203, which is not described herein again.
S304: and sending a drift starting instruction to the edge computing server, so that the edge computing server sends the to-be-processed task to the target computing resource device through the preset channel according to the drift starting instruction, and the target computing resource device completes processing of the to-be-processed task.
Here, when the drifting device controls the to-be-processed task to perform task drifting through the preset channel, it sends a drift start instruction to the edge computing server, so that the two can exchange information in time; based on the instruction, the edge computing server starts sending the to-be-processed task to the target computing resource device through the preset channel, thereby avoiding resource waste and suiting practical applications.
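From the edge computing server's side, reacting to the drift start instruction could look like the sketch below; the chunked transfer, the report callback signature, and the chunk size are assumptions made only for illustration.

```python
# Sketch of the edge server streaming the task over the preset channel after a
# drift start instruction, while reporting drift progress and a simple log.
from typing import Callable

def drift_task(task_bytes: bytes,
               channel_send: Callable[[bytes], None],   # writes to the preset channel
               report: Callable[[float, str], None],    # (progress, log line) to the drifting device
               chunk_size: int = 64 * 1024) -> None:
    total = len(task_bytes)
    sent = 0
    while sent < total:
        chunk = task_bytes[sent:sent + chunk_size]
        channel_send(chunk)
        sent += len(chunk)
        report(sent / total, f"sent {sent}/{total} bytes")  # task drift progress and log
```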
S305: and receiving the task drift progress and the task drift log reported by the edge computing server, and receiving the equipment state reported by the target computing resource equipment.
S306: and if the task drift needs to be stopped according to the task drift progress, the task drift log and/or the equipment state, sending a drift stop instruction to the edge computing server so that the edge computing server stops sending the to-be-processed task through the preset channel according to the drift stop instruction.
In this embodiment, after sending the drift start instruction to the edge computing server, the drifting device receives the task drift progress and the task drift log reported by the edge computing server, and receives the device state reported by the target computing resource device, so as to determine whether to stop the task drift based on the received information. If the drift needs to be stopped, the drifting device sends a drift stop instruction to the edge computing server, and the edge computing server stops sending the task to be processed through the preset channel based on the instruction, which meets the needs of various application scenarios.
The drifting device can preset conditions for stopping task drifting, and when the task drift progress, the task drift log and/or the device state meet the preset conditions, it determines that the task drift needs to be stopped. Here, the preset stop conditions may include the task drift progress reaching a preset progress threshold, the task drift log containing preset content, and/or the device state being a preset state, and the like.
In addition, after the drifting device sends the drift stop instruction to the edge computing server, it can continue to receive, in real time, the task drift progress and the task drift log reported by the edge computing server and the device state reported by the target computing resource device, so as to judge again whether the task drift needs to remain stopped. If not, the drifting device sends a drift resume instruction to the edge computing server, and the edge computing server resumes sending the task to be processed through the preset channel based on the instruction.
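The supervision loop on the drifting device's side can be sketched as follows; the report format, the concrete stop conditions (progress threshold, log keyword, device states), and the instruction strings are assumptions, since the embodiment only names the three inputs and the start/stop/resume instructions.

```python
# Sketch of the drifting device supervising the drift (assumed report format and thresholds).
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class DriftReport:
    progress: float      # 0.0 .. 1.0, reported by the edge computing server
    log: str             # task drift log reported by the edge computing server
    device_state: str    # state reported by the target computing resource device

def should_stop(report: DriftReport,
                progress_threshold: float = 1.0,
                log_keyword: str = "ERROR",
                bad_states: tuple = ("OVERLOADED", "OFFLINE")) -> bool:
    """Preset stop-drift conditions: progress threshold reached, the log
    contains preset content, and/or the device is in a preset state."""
    return (report.progress >= progress_threshold
            or log_keyword in report.log
            or report.device_state in bad_states)

def supervise(send_instruction: Callable[[str], None],
              reports: Iterable[DriftReport]) -> None:
    """send_instruction('start' | 'stop' | 'resume') is delivered to the edge server;
    reports is the stream of status reports received over time."""
    send_instruction("start")
    stopped = False
    for report in reports:
        if not stopped and should_stop(report):
            send_instruction("stop")      # edge server stops sending over the channel
            stopped = True
        elif stopped and not should_stop(report):
            send_instruction("resume")    # drift may resume once conditions clear
            stopped = False
```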
In this embodiment, when the drifting device controls the to-be-processed task to perform task drifting through the preset channel, it sends a drift start instruction to the edge computing server so that the two can exchange information in time, and the edge computing server starts sending the to-be-processed task to the target computing resource device through the preset channel based on the instruction, thereby avoiding resource waste. After sending the drift start instruction, the drifting device receives the task drift progress and the task drift log reported by the edge computing server and the device state reported by the target computing resource device, and determines whether the task drift needs to be stopped. If so, it sends a drift stop instruction to the edge computing server, which stops sending the task to be processed through the preset channel based on the instruction, meeting the needs of various application scenarios. In addition, the drifting device can allocate computing resources for the to-be-processed task in the edge computing server, establish a channel between the edge computing server and the computing resources, and process the task with those resources over the channel, solving the problem of poor processing quality of the edge computing server for delay-sensitive applications and enabling the edge computing server to respond quickly to burst computation requests.
Fig. 4 is a schematic structural diagram of a task processing device according to an embodiment of the present application, corresponding to the task processing method of the foregoing embodiments. For convenience of explanation, only the portions related to the embodiments of the present application are shown. The task processing device 40 includes: a receiving module 401, a determining module 402, an establishing module 403, and a drifting module 404. The task processing device may be the task processing apparatus itself, or a chip or an integrated circuit that implements the functions of the task processing apparatus. It should be noted here that the division into the receiving module, the determining module, the establishing module, and the drifting module is only a division of logical functions; physically, these modules may be integrated or independent.
The receiving module 401 is configured to receive a task drifting request sent by an edge computing server, where the task drifting request carries a task to be processed.
A determining module 402, configured to determine a target computing resource device corresponding to the task to be processed.
An establishing module 403, configured to establish a preset channel between the edge computing server and the target computing resource device.
A drifting module 404, configured to control the to-be-processed task to perform task drifting through the preset channel, so that the target computing resource device completes processing of the to-be-processed task.
In a possible implementation manner, the drift module 404 is specifically configured to:
setting the priority of the preset channel according to the task to be processed;
and performing task drifting processing on the task to be processed through the preset channel based on the priority.
In a possible implementation manner, the drift module 404 is specifically configured to:
and sending a drift starting instruction to the edge computing server so that the edge computing server sends the task to be processed to the target computing resource equipment through the preset channel according to the drift starting instruction.
In a possible implementation manner, the drift module 404 is further configured to:
receiving the task drift progress and the task drift log reported by the edge computing server, and receiving the equipment state reported by the target computing resource equipment;
and if the task drift needs to be stopped according to the task drift progress, the task drift log and/or the equipment state, sending a drift stop instruction to the edge computing server so that the edge computing server stops sending the to-be-processed task through the preset channel according to the drift stop instruction.
The apparatus provided in the embodiment of the present application may be configured to implement the technical solution of the method embodiment, and the implementation principle and the technical effect are similar, which are not described herein again in the embodiment of the present application.
Fig. 5 is a schematic structural diagram of another task processing device according to an embodiment of the present application, and based on fig. 4, the task processing device 40 further includes: a post-processing module 405.
In a possible implementation manner, after the drifting module 404 controls the task to be processed to perform task drifting through the preset channel, the post-processing module 405 is configured to:
acquiring a processing result of the target computing resource equipment on the task to be processed;
and sending the processing result to the edge computing server and/or other preset modules.
In a possible implementation manner, the post-processing module 405 is further configured to:
and disconnecting the preset channel.
The apparatus provided in the embodiment of the present application may be configured to implement the technical solution of the method embodiment, and the implementation principle and the technical effect are similar, which are not described herein again in the embodiment of the present application.
Alternatively, fig. 6A and 6B each schematically provide one possible basic hardware architecture of the task processing device described herein.
Referring to fig. 6A and 6B, the task processing device includes at least one processor 601 and a communication interface 603. Further optionally, a memory 602 and a bus 604 may also be included.
In the task processing device, there may be one or more processors 601; fig. 6A and 6B illustrate only one processor 601. Optionally, the processor 601 may be a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), or a Digital Signal Processor (DSP). If the task processing device has a plurality of processors 601, the types of the processors 601 may be different or the same. Optionally, the plurality of processors 601 of the task processing device may also be integrated into a multi-core processor.
Memory 602 stores computer instructions and data; the memory 602 may store the computer instructions and data required to implement the task processing methods provided herein, e.g., the memory 602 stores instructions for implementing the steps of the above task processing methods. The memory 602 may be any one or any combination of the following storage media: non-volatile memory (e.g., Read-Only Memory (ROM), Solid State Drive (SSD), Hard Disk Drive (HDD), optical disc) and volatile memory.
The communication interface 603 may provide information input/output for the at least one processor. It may also include any one or a combination of the following devices with a network access function: a network interface (e.g., an Ethernet interface), a wireless network card, and the like.
Optionally, the communication interface 603 may also be used for data communication between the task processing device and other computing devices or terminals.
Further alternatively, fig. 6A and 6B show the bus 604 by a thick line. The bus 604 may connect the processor 601 with the memory 602 and the communication interface 603. Thus, via bus 604, processor 601 may access memory 602 and may also interact with other computing devices or terminals using communication interface 603.
In the present application, the task processing device executes computer instructions in the memory 602, so that the task processing device implements the task processing method provided by the present application, or the task processing device deploys the task processing apparatus.
From the perspective of logical functional division, illustratively, as shown in fig. 6A, the memory 602 may include a receiving module 401, a determining module 402, an establishing module 403, and a drifting module 404. "Include" here merely means that the instructions stored in the memory, when executed, can implement the functions of the receiving module, the determining module, the establishing module, and the drifting module, respectively; it does not limit the physical structure.
Illustratively, as shown in fig. 6B, a post-processing module 405 may also be included in the memory 602. "Include" here merely means that the instructions stored in the memory, when executed, can also implement the functions of the post-processing module; it does not limit the physical structure.
In addition, the task processing device may be implemented by software as shown in fig. 6A and 6B, or may be implemented by hardware as a hardware module or a circuit unit.
The present application provides a computer-readable storage medium storing a computer program that causes a server to execute the above-described task processing method provided by the present application.
The present application provides a computer program product comprising computer instructions for executing the above task processing method provided by the present application by a processor.
The present application provides a chip comprising at least one processor and a communication interface providing information input and/or output for the at least one processor. Further, the chip may also include at least one memory for storing computer instructions. The at least one processor is used for calling and executing the computer instructions to execute the task processing method provided by the application.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.

Claims (10)

1. A task processing method, comprising:
receiving a task drifting request sent by an edge computing server, wherein the task drifting request carries a task to be processed;
determining target computing resource equipment corresponding to the task to be processed;
establishing a preset channel between the edge computing server and the target computing resource equipment;
and controlling the task to be processed to perform task drifting through the preset channel so as to enable the target computing resource equipment to complete processing of the task to be processed.
2. The method of claim 1, after establishing a preset channel between the edge computing server and the target computing resource device, further comprising:
setting the priority of the preset channel according to the task to be processed;
through the preset channel, the task to be processed is controlled to perform task drifting, and the method comprises the following steps:
and performing task drifting processing on the task to be processed through the preset channel based on the priority.
3. The method according to claim 1 or 2, wherein the controlling the task to be processed to perform task drifting through the preset channel comprises:
and sending a drift starting instruction to the edge computing server so that the edge computing server sends the task to be processed to the target computing resource equipment through the preset channel according to the drift starting instruction.
4. The method of claim 3, after the sending a drift start instruction to the edge computing server, further comprising:
receiving the task drift progress and the task drift log reported by the edge computing server, and receiving the equipment state reported by the target computing resource equipment;
and if the task drift needs to be stopped according to the task drift progress, the task drift log and/or the equipment state, sending a drift stop instruction to the edge computing server so that the edge computing server stops sending the to-be-processed task through the preset channel according to the drift stop instruction.
5. The method according to claim 1 or 2, wherein after the controlling the task to be processed to perform task drifting through the preset channel, the method further comprises:
acquiring a processing result of the target computing resource equipment on the task to be processed;
and sending the processing result to the edge computing server and/or other preset modules.
6. The method according to claim 5, further comprising, after the obtaining of the processing result of the target computing resource device on the task to be processed, the following steps:
and disconnecting the preset channel.
7. A task processing apparatus, comprising:
a receiving module, used for receiving a task drifting request sent by an edge computing server, wherein the task drifting request carries a task to be processed;
the determining module is used for determining target computing resource equipment corresponding to the task to be processed;
the establishing module is used for establishing a preset channel between the edge computing server and the target computing resource equipment;
and the drifting module is used for controlling the task to be processed to perform task drifting through the preset channel so as to enable the target computing resource equipment to complete processing of the task to be processed.
8. A task processing device characterized by comprising:
a processor;
a memory; and
a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor, the computer program comprising instructions for performing the method of any of claims 1-6.
9. A computer-readable storage medium, characterized in that it stores a computer program that causes a server to execute the method of any one of claims 1-6.
10. A computer program product comprising computer instructions for executing the method of any one of claims 1-6 by a processor.
CN202110174976.0A 2021-02-07 2021-02-07 Task processing method, device, equipment and storage medium Active CN112988346B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110174976.0A CN112988346B (en) 2021-02-07 2021-02-07 Task processing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110174976.0A CN112988346B (en) 2021-02-07 2021-02-07 Task processing method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112988346A 2021-06-18
CN112988346B CN112988346B (en) 2024-02-23

Family

ID=76347952

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110174976.0A Active CN112988346B (en) 2021-02-07 2021-02-07 Task processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112988346B (en)



Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108494612A (en) * 2018-01-19 2018-09-04 西安电子科技大学 A kind of network system and its method of servicing that mobile edge calculations service is provided
CN108683613A (en) * 2018-05-10 2018-10-19 Oppo广东移动通信有限公司 A kind of method, apparatus and computer storage media of scheduling of resource
CN110062026A (en) * 2019-03-15 2019-07-26 重庆邮电大学 Mobile edge calculations resources in network distribution and calculating unloading combined optimization scheme
WO2020216135A1 (en) * 2019-04-25 2020-10-29 南京邮电大学 Multi-user multi-mec task unloading resource scheduling method based on edge-end collaboration
CN110460635A (en) * 2019-07-04 2019-11-15 华南理工大学 One kind is towards unpiloted edge discharging method and device
CN110381159A (en) * 2019-07-26 2019-10-25 中国联合网络通信集团有限公司 Task processing method and system
CN110856183A (en) * 2019-11-18 2020-02-28 南京航空航天大学 Edge server deployment method based on heterogeneous load complementation and application
CN111641891A (en) * 2020-04-16 2020-09-08 北京邮电大学 Task peer-to-peer unloading method and device in multi-access edge computing system
CN111625354A (en) * 2020-05-19 2020-09-04 南京乐贤智能科技有限公司 Arrangement method of computing power of edge computing equipment and related equipment thereof

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Cao Pu, "Task Scheduling and Resource Allocation in Mobile Edge Computing" (移动边缘计算中的任务调度与资源分配), China Master's Theses Full-text Database, Information Science and Technology series, pages 136-904 *
Chen Min, "Theory and Methods of Artificial Intelligence Communication" (人工智能通信理论与方法), Huazhong University of Science and Technology Press, page 60 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113641124A (en) * 2021-08-06 2021-11-12 珠海格力电器股份有限公司 Calculation force distribution method and device, controller and building control system
CN113641124B (en) * 2021-08-06 2023-03-10 珠海格力电器股份有限公司 Calculation force distribution method and device, controller and building control system
CN114900860A (en) * 2022-05-05 2022-08-12 中国联合网络通信集团有限公司 Mobile terminal edge calculation method, device, edge calculation server and medium
CN114900860B (en) * 2022-05-05 2024-04-02 中国联合网络通信集团有限公司 Edge computing method and device for mobile terminal, edge computing server and medium
CN115587103A (en) * 2022-12-07 2023-01-10 杭州华橙软件技术有限公司 Algorithm resource planning method, device, terminal and computer readable storage medium

Also Published As

Publication number Publication date
CN112988346B (en) 2024-02-23

Similar Documents

Publication Publication Date Title
CN112988346B (en) Task processing method, device, equipment and storage medium
US10698717B2 (en) Accelerator virtualization method and apparatus, and centralized resource manager
US20180285053A1 (en) Screenshot method and apparatus
US9251040B2 (en) Remote debugging in a cloud computing environment
CN108369604B (en) Method and device for sharing file system by multiple operating systems and electronic equipment
CN105979007B (en) Method and device for accelerating resource processing and network function virtualization system
CN109995877B (en) Information pushing method and device
CN111163130B (en) Network service system and data transmission method thereof
US20100161559A1 (en) Maintaining a network connection of a workload during transfer
CN106569917B (en) Data backup method and mobile terminal
KR20200034572A (en) Request processing method and apparatus
CN111510493B (en) Distributed data transmission method and device
CN115445194A (en) Rendering method, device and equipment of game and storage medium
CN110032364B (en) Data processing method, device, electronic equipment and computer storage medium
CN115237589A (en) SR-IOV-based virtualization method, device and equipment
CN114598931A (en) Streaming method, system, device and medium for multi-open cloud game
CN114296953A (en) Multi-cloud heterogeneous system and task processing method
CN116069493A (en) Data processing method, device, equipment and readable storage medium
CN114327846A (en) Cluster capacity expansion method and device, electronic equipment and computer readable storage medium
CN116662009A (en) GPU resource allocation method and device, electronic equipment and storage medium
CN110955461A (en) Processing method, device and system of computing task, server and storage medium
CN115840648A (en) Simulation task processing method and device and electronic equipment
CN116243853A (en) Data transmission method and device, electronic equipment and nonvolatile storage medium
CN112968812A (en) Network performance testing method, device, equipment and storage medium
CN110971696B (en) System and method for realizing virtual electronic card communication

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant