CN117453396A - Task data processing method and device based on edge computing and electronic equipment - Google Patents


Info

Publication number
CN117453396A
CN117453396A (application CN202311392596.XA)
Authority
CN
China
Prior art keywords
edge computing
processing
task data
data
computing server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311392596.XA
Other languages
Chinese (zh)
Inventor
覃碧玉
蒋飞虎
唐忠宏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Beiliande Industrial Co ltd
Original Assignee
Shenzhen Beiliande Industrial Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Beiliande Industrial Co ltd filed Critical Shenzhen Beiliande Industrial Co ltd
Priority to CN202311392596.XA
Publication of CN117453396A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061 Partitioning or combining of resources
    • G06F 9/5072 Grid computing
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/30 Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
    • A63F 13/35 Details of game servers
    • A63F 13/358 Adapting the game course according to the network or server load, e.g. for reducing latency due to different connection speeds between clients
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Mathematical Physics (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

A task data processing method and device based on edge computing, and an electronic device, relating to the field of data storage. Task data running on a user terminal is obtained, where the task data includes first task data and second task data; whether the first data processing difficulty of the first task data is greater than a preset processing difficulty is judged; whether the second data processing difficulty of the second task data is less than or equal to the preset processing difficulty is judged; if the first data processing difficulty is greater than the preset processing difficulty, the first task data is sent to a cloud server for processing; and if the second data processing difficulty is less than or equal to the preset processing difficulty, the second task data is sent to an edge computing server for processing. The technical scheme addresses the delay and loss of smoothness that still occur when a user plays a cloud game and that degrade the user's cloud gaming experience.

Description

Task data processing method and device based on edge computing and electronic equipment
Technical Field
The present disclosure relates to the field of data storage, and in particular, to a method and an apparatus for processing task data based on edge computing, and an electronic device.
Background
The core idea of edge computing is to deploy computing resources, namely edge computing servers, near user terminals, so that part of the data processing tasks are performed by the edge computing servers close to the user terminals, thereby reducing the resource occupation of cloud servers.
Today, when a user plays a cloud game, the task data generated by the user terminal is mostly processed by a cloud server, while a portion of it is handed to an edge computing server for processing. However, in this approach, the task data assigned to the cloud server and to the edge computing server is distributed randomly. Because the data processing capability of an edge computing server is lower than that of a cloud server, when the edge computing server is handed task data with higher processing difficulty, delay and loss of smoothness still occur during cloud gaming and degrade the user's experience. A task data processing method and device based on edge computing, and an electronic device, are therefore needed.
Disclosure of Invention
The application provides a task data processing method and device based on edge computing, and an electronic device, which address the delay and loss of smoothness that still occur when a user plays a cloud game and that degrade the user's cloud gaming experience.
In a first aspect of the present application, a task data processing method based on edge computing is provided. The method is applied to a server and specifically includes the following steps: obtaining task data running on a user terminal, where the task data includes first task data and second task data; judging whether a first data processing difficulty of the first task data is greater than a preset processing difficulty; judging whether a second data processing difficulty of the second task data is less than or equal to the preset processing difficulty; if the first data processing difficulty is greater than the preset processing difficulty, sending the first task data to a cloud server for processing; and if the second data processing difficulty is less than or equal to the preset processing difficulty, sending the second task data to an edge computing server for processing.
By adopting this technical scheme, a preset processing difficulty is set and the data processing difficulty of each piece of task data is compared against it, so that task data whose data processing difficulty is greater than the preset processing difficulty, namely the first task data, is sent to the cloud server for processing, while task data whose data processing difficulty is less than or equal to the preset processing difficulty, such as game character data, is sent to the edge computing server for processing. The edge computing server can thus stably process a fixed portion of the task data, improving the stability of the user's cloud gaming.
Optionally, the data processing capability of the edge computing server is lower than that of the cloud server, and the transmission efficiency of the edge computing server is higher than that of the cloud server.
By adopting the technical scheme, the edge computing server is close to the user, can respond to the request of the user more quickly, provides low-delay experience, and simultaneously leaves task data with complex computation, namely task data with data processing difficulty higher than preset processing difficulty, for the cloud server with stronger processing performance to process.
Optionally, before sending the second task data to the edge computing server for processing if the second data processing difficulty is less than or equal to the preset processing difficulty, the method further includes: acquiring a plurality of edge computing servers in a preset range of a user terminal, wherein the plurality of edge computing servers comprise a first edge computing server and a second edge computing server, and the first edge computing server and the second edge computing server are any two different edge computing servers in the plurality of edge computing servers; judging whether the first transmission efficiency of the first edge computing server is greater than the second transmission efficiency of the second edge computing server; and if the first transmission efficiency of the first edge computing server is greater than the second transmission efficiency of the second edge computing server, selecting the first edge computing server as a main edge computing server for processing task data.
According to this technical scheme, a plurality of edge computing servers can be arranged around the user and can simultaneously process the task data that the server distributes to them, and the edge computing server with the highest transmission efficiency among them is selected as the main edge computing server, so that the user obtains lower delay and faster transmission when playing the game and the stability of task data transmission during gaming is ensured.
Optionally, if the first transmission efficiency of the first edge computing server is greater than the second transmission efficiency of the second edge computing server, selecting the first edge computing server as the main edge computing server for processing task data, specifically including: transmitting third task data to a plurality of edge computing servers through a user terminal; acquiring a plurality of processing times required by the third task data at a plurality of edge computing servers, wherein the plurality of processing times are the sum of the time required by the plurality of edge computing servers to receive the third task data, the time required by processing the third task data and the time required by returning a processing result to the user terminal; the plurality of processing times comprise a first processing time and a second processing time, wherein the first processing time is the processing time corresponding to the first edge computing server, and the second processing time is the processing time corresponding to the second edge computing server; judging whether the first processing time is longer than the second processing time; if the first processing time is longer than the second processing time, the first edge computing server is selected as a main edge computing server for processing task data.
By adopting the technical scheme, the server can confirm the edge computing server with highest transmission efficiency in the edge computing servers in the preset range around the user in various modes, can simultaneously send the same third task data to the edge computing servers through the user terminal, and can determine the edge computing server with highest transmission efficiency in the edge computing servers by comparing a plurality of processing times required by the edge computing servers for processing the third task data.
Optionally, after selecting the first edge computing server as the primary edge computing server for processing the task data if the first transmission efficiency of the first edge computing server is greater than the second transmission efficiency of the second edge computing server, the method further includes: acquiring a plurality of operation states of a plurality of edge computing servers at intervals of preset time periods, wherein the operation states comprise a first operation state and a second operation state, the first operation state is an operation state corresponding to the first edge computing server, and the second operation state is an operation state of the second edge computing server; judging whether the first operation state has operation faults, wherein the operation faults comprise firmware faults, storage faults and network faults; if the first running state has running fault, judging whether the second running state has running fault or not; and if the second operation state does not have operation faults, switching the main edge computing server of the task data to the second edge computing server.
By adopting the technical scheme, the server monitors the running states of the plurality of edge computing servers in real time, and when the running states of the main edge computing servers have running faults, the main edge computing servers can be switched to the edge computing server with the highest data transmission efficiency in other edge computing servers, so that the smooth running of the game is ensured.
Optionally, after the first data processing difficulty is greater than the preset processing difficulty and the second data processing difficulty is less than or equal to the preset processing difficulty, the first task data is sent to the cloud server for processing, and the second task data is sent to the edge computing server for processing, the method further includes: obtaining a game experience influence level of the second task data; judging whether the game experience influence level is larger than a preset influence level or not; and if the game experience influence level is greater than the preset influence level, sending the second task data to the cloud server for processing.
Through this technical scheme, after the server distributes the task data whose data processing difficulty is less than or equal to the preset processing difficulty to the edge computing server, it further judges whether any of that task data has a game experience influence level greater than the preset influence level; if so, the server changes the processing arrangement and reassigns the task data whose influence level is greater than the preset influence level to the cloud server for processing.
Optionally, the data processing difficulty includes motion data, animation data, and dynamic adjustment data; the game experience impact level includes a mission importance level, an NPC importance level, and a scenario level.
Through this technical scheme, in the first aspect, the server can assign the data processing difficulty according to several factors, such as the computational complexity, data size and real-time requirement of the task data; the difficulty scale can be divided into a plurality of data processing difficulties, one of which is then selected as the preset processing difficulty. In the second aspect, the game experience influence level includes, but is not limited to, a mission importance level, an NPC importance level, a scenario level, and the like; these correspond to substantial game content that directly affects the user's game experience, so the game data carrying them needs to be processed and sent to the user terminal as soon as possible, and the cloud server can preferentially process received task data with a higher game experience influence level.
In a second aspect of the present application, a task data processing device based on edge computing is provided, where the device is a server, and the server includes an acquisition module, a judgment module, and a processing module, where,
The acquisition module is used for acquiring task data operated by the user terminal, wherein the task data comprises first task data and second task data;
the judging module is used for judging whether the first data processing difficulty of the first task data is greater than the preset processing difficulty or not; judging whether the second data processing difficulty of the second task data is smaller than or equal to the preset processing difficulty;
the processing module is used for sending the first task data to the cloud server for processing if the first data processing difficulty is greater than the preset processing difficulty; and if the second data processing difficulty is smaller than or equal to the preset processing difficulty, sending the second task data to the edge computing server for processing.
Optionally, the processing module is configured to obtain a plurality of edge computing servers in a preset range of the user terminal before sending the second task data to the edge computing server for processing if the second data processing difficulty is less than or equal to the preset processing difficulty, where the plurality of edge computing servers include a first edge computing server and a second edge computing server, and the first edge computing server and the second edge computing server are any two different edge computing servers in the plurality of edge computing servers; judging whether the first transmission efficiency of the first edge computing server is greater than the second transmission efficiency of the second edge computing server; and if the first transmission efficiency of the first edge computing server is greater than the second transmission efficiency of the second edge computing server, selecting the first edge computing server as a main edge computing server for processing task data.
Optionally, the processing module is configured to select the first edge computing server as the main edge computing server for processing the task data if the first transmission efficiency of the first edge computing server is greater than the second transmission efficiency of the second edge computing server, and specifically includes: transmitting third task data to a plurality of edge computing servers through a user terminal; acquiring a plurality of processing times required by the third task data at a plurality of edge computing servers, wherein the plurality of processing times are the sum of the time required by the plurality of edge computing servers to receive the third task data, the time required by processing the third task data and the time required by returning a processing result to the user terminal; the plurality of processing times comprise a first processing time and a second processing time, wherein the first processing time is the processing time corresponding to the first edge computing server, and the second processing time is the processing time corresponding to the second edge computing server; judging whether the first processing time is longer than the second processing time; if the first processing time is longer than the second processing time, the first edge computing server is selected as a main edge computing server for processing task data.
Optionally, the processing module is configured to, after selecting the first edge computing server as the primary edge computing server for processing the task data if the first transmission efficiency of the first edge computing server is greater than the second transmission efficiency of the second edge computing server, further include: acquiring a plurality of operation states of a plurality of edge computing servers at intervals of preset time periods, wherein the operation states comprise a first operation state and a second operation state, the first operation state is an operation state corresponding to the first edge computing server, and the second operation state is an operation state of the second edge computing server; judging whether the first operation state has operation faults, wherein the operation faults comprise firmware faults, storage faults and network faults; if the first running state has running fault, judging whether the second running state has running fault or not; and if the second operation state does not have operation faults, switching the main edge computing server of the task data to the second edge computing server.
Optionally, the processing module is configured to send the first task data to the cloud server for processing if the first data processing difficulty is greater than the preset processing difficulty and the second data processing difficulty is less than or equal to the preset processing difficulty, and send the second task data to the edge computing server for processing, where the method further includes: obtaining a game experience influence level of the second task data; judging whether the game experience influence level is larger than a preset influence level or not; and if the game experience influence level is greater than the preset influence level, sending the second task data to the cloud server for processing.
In a third aspect of the present application, an electronic device is provided, comprising a processor, a memory for storing instructions, a user interface and a network interface for communicating with other devices, the processor being configured to execute the instructions stored in the memory to cause the electronic device to perform a method as described in any one of the above.
In a fourth aspect of the present application, a computer readable storage medium is provided, the computer readable storage medium storing a computer program which, when executed by a processor, performs a method according to any one of the above.
One or more technical solutions provided in the embodiments of the present application at least have the following technical effects or advantages:
1. A preset processing difficulty is set, and the data processing difficulty of each piece of task data is compared against it, so that task data whose data processing difficulty is greater than the preset processing difficulty, namely the first task data, is sent to the cloud server for processing, while task data whose data processing difficulty is less than or equal to the preset processing difficulty, such as game character data, is sent to the edge computing server for processing. The edge computing server can thus stably process a fixed portion of the task data, improving the stability of the user's cloud gaming.
2. A plurality of edge computing servers can be arranged around the user and can simultaneously process the task data that the server distributes to them; the edge computing server with the highest transmission efficiency among them is selected as the main edge computing server, so that the user obtains lower delay and faster transmission when playing games, and the stability of task data transmission during gaming is ensured.
3. The server monitors the operation states of the plurality of edge computing servers in real time, and when the operation state of the main edge computing server shows an operation fault, the role of main edge computing server can be switched to the edge computing server with the highest data transmission efficiency among the remaining servers, so that the user's game continues to run smoothly.
Drawings
Fig. 1 is a schematic flow chart of a task data processing method based on edge computing according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of a task data processing device based on edge computing according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Reference numerals illustrate: 21. an acquisition module; 22. a judging module; 23. a processing module; 300. an electronic device; 301. a processor; 302. a memory; 303. a user interface; 304. a network interface; 305. a communication bus.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the present specification, the technical solutions in the embodiments of the present specification will be clearly and completely described below with reference to the drawings in the embodiments of the present specification, and it is obvious that the described embodiments are only some embodiments of the present application, but not all embodiments.
The terminology used in the following embodiments of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of this application, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in this application refers to and encompasses any or all possible combinations of one or more of the listed items.
The terms "first," "second," and the like, are used below for descriptive purposes only and are not to be construed as implying or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature, and in the description of embodiments of the present application, unless otherwise indicated, the meaning of "a plurality" is two or more.
In order to make the technical scheme of the present invention better understood by those skilled in the art, the present invention will be further described in detail with reference to the accompanying drawings.
Referring to fig. 1, a flow chart of a task data processing method based on edge computing according to an embodiment of the present invention is shown, and the method is applied to a server, and the flow chart mainly includes the following steps: s101 to S103.
Step S101, task data operated by a user terminal is obtained, wherein the task data comprise first task data and second task data.
Specifically, the server acquires, in real time, the task data of the user while the user plays a game, where the task data includes first task data and second task data. The task data includes, but is not limited to: game state data while the user plays, task progress data, position data of the character played by the user, action data of that character, and the like. The task data is closely tied to gameplay: the user terminal sends the task data to the server, the server processes it, and the processing result is returned to the user terminal so that the game content runs normally. For ease of description, the following embodiments take task data from the same game as an example.
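As an illustration only, such task data could be modeled as a small record carrying its payload together with the attributes used for routing later in this method. The field names (`kind`, `difficulty`, `experience_level`) are assumptions of this sketch rather than terms defined by the application; the sketches in the following steps reuse this shape.

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class TaskData:
    """One unit of task data sent by the user terminal (illustrative sketch)."""
    task_id: str
    kind: str              # e.g. "motion", "animation", "dynamic_adjustment"
    payload: Any           # game state, character position, action data, etc.
    difficulty: int        # data processing difficulty level, e.g. 1..5
    experience_level: int  # game experience influence level, e.g. 1..5
```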
Step S102, judging whether the first data processing difficulty of the first task data is greater than a preset processing difficulty; judging whether the second data processing difficulty of the second task data is smaller than or equal to the preset processing difficulty.
Specifically, when the user plays the same game, different task data in the game have different data processing difficulties. When the server receives several pieces of task data sent by the user terminal, it needs to judge whether the data processing difficulty of each piece is greater than the preset processing difficulty, and then dispatch the task data according to the result of that comparison. If the first task data corresponds to the first data processing difficulty and the second task data corresponds to the second data processing difficulty, the server needs to compare the first data processing difficulty with the preset processing difficulty and compare the second data processing difficulty with the preset processing difficulty.
In one possible implementation, step S102 further includes: the data processing difficulty includes motion data, animation data, and dynamic adjustment data.
Specifically, since the task data is sent by the user terminal to the server while the user plays the game, it can be of various types, including but not limited to: motion data, animation data, dynamic adjustment data, and the like. The motion data covers the movement and skill releases of the game character played by the user; the animation data includes game CG data triggered when that character enters a related scenario; and the dynamic adjustment data covers enemy spawn data, the refresh of materials on the game map, and the like. The server assigns different data processing difficulties to the task data; it can do so according to several factors such as the computational complexity, data size and real-time requirement of the task data, divide the difficulty scale into a plurality of data processing difficulties, and then select one of them as the preset processing difficulty.
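A minimal sketch, assuming a five-level scale, of how a server might derive a difficulty level from computational complexity, data size and real-time requirement; the weights, thresholds and the choice of level 4 as the preset processing difficulty are invented here for illustration and are not specified by this application.

```python
def assign_difficulty(compute_cost: float, size_bytes: int, realtime: bool) -> int:
    """Map task attributes onto a 1..5 difficulty scale (illustrative weights)."""
    score = 0.6 * compute_cost + 0.4 * (size_bytes / 1_000_000)  # hypothetical weighting
    if realtime:
        score *= 1.2  # real-time tasks are treated as harder to serve on time
    thresholds = [1.0, 2.0, 4.0, 8.0]  # hypothetical cut points between the five levels
    return 1 + sum(score > t for t in thresholds)

PRESET_PROCESSING_DIFFICULTY = 4  # one level is chosen as the preset processing difficulty
```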
For example, assume the data processing difficulty is divided into 5 levels and the preset processing difficulty is difficulty 4. The server obtains first task data whose first data processing difficulty is difficulty 5 and second task data whose second data processing difficulty is difficulty 3, and then judges the magnitude relation of difficulty 5 and difficulty 3 with respect to difficulty 4.
Step S103, if the first data processing difficulty is greater than the preset processing difficulty, the first task data is sent to a cloud server for processing; and if the second data processing difficulty is less than or equal to the preset processing difficulty, the second task data is sent to an edge computing server for processing.
Specifically, the server distributes task data with the processing difficulty greater than the preset processing difficulty to the cloud server for data processing by comparing the data processing difficulty of the task data with the preset processing difficulty, and distributes task data with the processing difficulty less than or equal to the preset processing difficulty to the edge computing server for processing.
For example, continuing the example in step S102: the first data processing difficulty of the first task data is difficulty 5, and since difficulty 5 is greater than the preset processing difficulty, namely difficulty 4, the first task data is allocated to the cloud server for processing; the second data processing difficulty of the second task data is difficulty 3, and since difficulty 3 is smaller than the preset processing difficulty, namely difficulty 4, the second task data is allocated to the edge computing server for processing.
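Steps S102 and S103 could then be sketched roughly as follows, reusing the illustrative `TaskData` shape from step S101; `send_to_cloud` and `send_to_edge` are placeholder transports assumed for this sketch rather than interfaces defined by the application.

```python
def send_to_cloud(task) -> None:
    # Placeholder transport to the cloud server; an assumption of this sketch.
    print(f"cloud server <- {task.task_id}")

def send_to_edge(task) -> None:
    # Placeholder transport to the edge computing server; an assumption of this sketch.
    print(f"edge computing server <- {task.task_id}")

def dispatch(task, preset_difficulty: int = 4) -> str:
    """Steps S102-S103: route task data by comparing its difficulty to the preset."""
    if task.difficulty > preset_difficulty:
        send_to_cloud(task)   # e.g. first task data, difficulty 5 > preset 4
        return "cloud"
    send_to_edge(task)        # e.g. second task data, difficulty 3 <= preset 4
    return "edge"
```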
In one possible implementation, step S103 further includes: the data processing capacity of the edge computing server is lower than that of the cloud server, and the transmission efficiency of the edge computing server is higher than that of the cloud server.
Specifically, the edge computing server is close to the user, can respond to the request of the user more quickly, provides low-delay experience, and simultaneously leaves task data with more complex computation, namely task data with higher data processing difficulty than preset processing difficulty, for the cloud server with stronger processing performance to process.
In a possible implementation manner, before step S103, the method further includes: acquiring a plurality of edge computing servers in a preset range of a user terminal, wherein the plurality of edge computing servers comprise a first edge computing server and a second edge computing server, and the first edge computing server and the second edge computing server are any two different edge computing servers in the plurality of edge computing servers; judging whether the first transmission efficiency of the first edge computing server is greater than the second transmission efficiency of the second edge computing server; and if the first transmission efficiency of the first edge computing server is greater than the second transmission efficiency of the second edge computing server, selecting the first edge computing server as a main edge computing server for processing task data.
Specifically, a plurality of edge computing servers may be deployed around the user, and these edge computing servers can simultaneously process the task data that the server assigns to them; however, to ensure stable operation, a main edge computing server needs to be selected from among them. When several edge computing servers exist around the user, the server tests each of them and selects the one with the highest transmission rate as the main edge computing server for processing task data.
In one possible implementation, step S103 further includes: transmitting third task data to a plurality of edge computing servers through a user terminal; acquiring a plurality of processing times required by the third task data at a plurality of edge computing servers, wherein the plurality of processing times are the sum of the time required by the plurality of edge computing servers to receive the third task data, the time required by processing the third task data and the time required by returning a processing result to the user terminal; the plurality of processing times comprise a first processing time and a second processing time, wherein the first processing time is the processing time corresponding to the first edge computing server, and the second processing time is the processing time corresponding to the second edge computing server; and judging whether the first processing time is longer than the second processing time.
Specifically, because different edge computing servers differ in computing performance and in their distance from the user terminal, their transmission rates to the user terminal also differ, and the server can measure these transmission rates in several ways. The server may, through the user terminal, send the same task data, which may be task data used only for testing the transmission rate, to the several edge servers within the preset range, and for each edge computing server separately record the sum of the time from sending the task data until the edge server receives it, the time the edge computing server takes to process it, and the time taken to return the processing result to the user terminal. The server may repeat this measurement several times for each edge computing server and compute each server's average total time. The server takes the edge computing server with the smallest average total time as the one with the highest transmission rate and uses it as the main edge computing server. It should be noted that the other edge computing servers, which are not chosen as the main edge computing server, may still process corresponding task data; for example, continuing the example in step S102, the edge computing servers may be ranked by performance into three tiers, high, middle and low, where the high tier handles task data of difficulty 3 and difficulty 4, the middle tier handles task data of difficulty 2, and the low tier handles task data of difficulty 1.
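A sketch consistent with the selection criterion described above (the smallest average of receive, process and return time wins); the `probe` callable, which sends the third task data to one edge computing server and returns the elapsed time in seconds, is an assumed interface, not part of this application.

```python
import statistics
from typing import Callable, Dict, List

def select_main_edge_server(
    servers: List[str],
    probe: Callable[[str], float],  # sends the third task data, returns elapsed seconds
    repeats: int = 3,
) -> str:
    """Pick the edge computing server whose average round-trip processing time is smallest."""
    averages: Dict[str, float] = {}
    for server in servers:
        # Each sample covers receiving the test data, processing it, and
        # returning the result to the user terminal.
        samples = [probe(server) for _ in range(repeats)]
        averages[server] = statistics.mean(samples)
    # The smallest average time is treated as the highest transmission efficiency.
    return min(averages, key=averages.get)
```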
In a possible implementation manner, after step S103, the method further includes: acquiring a plurality of operation states of a plurality of edge computing servers at intervals of preset time periods, wherein the operation states comprise a first operation state and a second operation state, the first operation state is an operation state corresponding to the first edge computing server, and the second operation state is an operation state of the second edge computing server; judging whether the first operation state has operation faults, wherein the operation faults comprise firmware faults, storage faults and network faults; if the first running state has running fault, judging whether the second running state has running fault or not; and if the second operation state does not have operation faults, switching the main edge computing server of the task data to the second edge computing server.
Specifically, the server obtains the operation states of all edge computing servers at intervals of a preset time period; the preset time period may be 5 minutes, 15 minutes or 20 minutes and can be set according to the actual situation, which is not limited in this embodiment. The server checks whether any edge server has an operation fault, where operation faults include, but are not limited to, firmware faults, storage faults, network faults and the like. If the edge server with the operation fault is the main edge computing server, the server switches the role of main edge computing server to the fault-free edge computing server with the highest transmission rate among the remaining servers.
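The periodic health check and failover described above might be sketched as follows; the `get_state` interface, the fault names and the 5-minute default period are assumptions made for illustration.

```python
import time
from typing import Callable, List

OPERATION_FAULTS = {"firmware_fault", "storage_fault", "network_fault"}

def monitor_and_failover(
    main_server: str,
    servers: List[str],
    get_state: Callable[[str], str],           # returns "ok" or one of OPERATION_FAULTS
    pick_fastest: Callable[[List[str]], str],  # e.g. select_main_edge_server from above
    interval_s: int = 300,                     # preset period, e.g. 5 minutes
) -> None:
    """Poll edge server states every interval and switch the main server on a fault."""
    while True:
        if get_state(main_server) in OPERATION_FAULTS:
            healthy = [s for s in servers
                       if s != main_server and get_state(s) not in OPERATION_FAULTS]
            if healthy:
                # Hand the main role to the fastest fault-free edge computing server.
                main_server = pick_fastest(healthy)
        time.sleep(interval_s)
```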
In a possible implementation manner, after step S103, the method further includes: obtaining a game experience influence level of the second task data; judging whether the game experience influence level is larger than a preset influence level or not; and if the game experience influence level is greater than the preset influence level, sending the second task data to the cloud server for processing.
Specifically, after the server distributes the task data whose data processing difficulty is less than or equal to the preset processing difficulty to the edge computing server, it also judges whether any of that task data has a game experience influence level greater than the preset influence level; if so, the server changes the processing arrangement and reassigns the task data whose influence level is greater than the preset influence level to the cloud server for processing.
In one possible implementation, step S103 further includes: the game experience impact level includes a mission importance level, an NPC importance level, and a scenario level.
Specifically, the game experience influence level includes, but is not limited to, a mission importance level, an NPC importance level, a scenario level, and the like; these correspond to substantial game content that directly affects the user's game experience, so the game data carrying them needs to be processed and sent to the user terminal as soon as possible, and the cloud server can preferentially process received task data with a higher game experience influence level. In this embodiment, the game experience influence level of the task data may be set differently for different games, and its specific setting is not limited here.
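The experience-level check could be layered on top of the difficulty-based dispatch roughly as shown below, again using the illustrative task attributes from step S101; the threshold values are invented for this sketch.

```python
def dispatch_with_experience_level(task,
                                   preset_difficulty: int = 4,
                                   preset_impact_level: int = 4) -> str:
    """Escalate low-difficulty but high-impact task data back to the cloud server."""
    if task.difficulty > preset_difficulty:
        return "cloud"                       # first task data: always the cloud server
    if task.experience_level > preset_impact_level:
        # Second task data tied to important missions, key NPCs or main scenario
        # beats is handed to the cloud server, which processes it preferentially.
        return "cloud"
    return "edge"                            # everything else stays on the edge
```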
By adopting the method, the beneficial effects which can be achieved include at least one of the following:
1. A preset processing difficulty is set, and the data processing difficulty of each piece of task data is compared against it, so that task data whose data processing difficulty is greater than the preset processing difficulty, namely the first task data, is sent to the cloud server for processing, while task data whose data processing difficulty is less than or equal to the preset processing difficulty, such as game character data, is sent to the edge computing server for processing. The edge computing server can thus stably process a fixed portion of the task data, improving the stability of the user's cloud gaming.
2. A plurality of edge computing servers can be arranged around the user and can simultaneously process the task data that the server distributes to them; the edge computing server with the highest transmission efficiency among them is selected as the main edge computing server, so that the user obtains lower delay and faster transmission when playing games, and the stability of task data transmission during gaming is ensured.
3. The server monitors the operation states of the plurality of edge computing servers in real time, and when the operation state of the main edge computing server shows an operation fault, the role of main edge computing server can be switched to the edge computing server with the highest data transmission efficiency among the remaining servers, so that the user's game continues to run smoothly.
Referring to fig. 2, an apparatus for processing task data based on edge computing according to an embodiment of the present invention is a server, where the server includes an obtaining module 21, a judging module 22 and a processing module 23,
an acquisition module 21, configured to acquire task data running on the user terminal, where the task data includes first task data and second task data;
a judging module 22, configured to judge whether the first data processing difficulty of the first task data is greater than a preset processing difficulty; judging whether the second data processing difficulty of the second task data is smaller than or equal to the preset processing difficulty;
the processing module 23 is configured to send the first task data to the cloud server for processing if the first data processing difficulty is greater than a preset processing difficulty; and if the second data processing difficulty is smaller than or equal to the preset processing difficulty, sending the second task data to the edge computing server for processing.
In a possible implementation manner, the processing module 23 is configured to obtain a plurality of edge computing servers within a preset range of the user terminal before sending the second task data to the edge computing server for processing if the second data processing difficulty is less than or equal to the preset processing difficulty, where the plurality of edge computing servers includes a first edge computing server and a second edge computing server, and the first edge computing server and the second edge computing server are any two different edge computing servers in the plurality of edge computing servers; judging whether the first transmission efficiency of the first edge computing server is greater than the second transmission efficiency of the second edge computing server; and if the first transmission efficiency of the first edge computing server is greater than the second transmission efficiency of the second edge computing server, selecting the first edge computing server as a main edge computing server for processing task data.
In one possible implementation, the processing module 23 is configured to select the first edge computing server as the main edge computing server for processing the task data if the first transmission efficiency of the first edge computing server is greater than the second transmission efficiency of the second edge computing server, and specifically includes: transmitting third task data to a plurality of edge computing servers through a user terminal; acquiring a plurality of processing times required by the third task data at a plurality of edge computing servers, wherein the plurality of processing times are the sum of the time required by the plurality of edge computing servers to receive the third task data, the time required by processing the third task data and the time required by returning a processing result to the user terminal; the plurality of processing times comprise a first processing time and a second processing time, wherein the first processing time is the processing time corresponding to the first edge computing server, and the second processing time is the processing time corresponding to the second edge computing server; judging whether the first processing time is longer than the second processing time; if the first processing time is longer than the second processing time, the first edge computing server is selected as a main edge computing server for processing task data.
In a possible implementation manner, the processing module 23 is configured to, after selecting the first edge computing server as the primary edge computing server for processing the task data if the first transmission efficiency of the first edge computing server is greater than the second transmission efficiency of the second edge computing server, further include: acquiring a plurality of operation states of a plurality of edge computing servers at intervals of preset time periods, wherein the operation states comprise a first operation state and a second operation state, the first operation state is an operation state corresponding to the first edge computing server, and the second operation state is an operation state of the second edge computing server; judging whether the first operation state has operation faults, wherein the operation faults comprise firmware faults, storage faults and network faults; if the first running state has running fault, judging whether the second running state has running fault or not; and if the second operation state does not have operation faults, switching the main edge computing server of the task data to the second edge computing server.
In one possible implementation manner, the processing module 23 is configured to send the first task data to the cloud server for processing if the first data processing difficulty is greater than the preset processing difficulty and the second data processing difficulty is less than or equal to the preset processing difficulty, and after sending the second task data to the edge computing server for processing, the method further includes: obtaining a game experience influence level of the second task data; judging whether the game experience influence level is larger than a preset influence level or not; and if the game experience influence level is greater than the preset influence level, sending the second task data to the cloud server for processing.
The application also discloses an electronic device comprising a processor, a memory, a user interface and a network interface, the memory being for storing instructions, the user interface and the network interface being for communicating to other devices, the processor being for executing the instructions stored in the memory to cause the electronic device to perform a method as described in any one of the above.
It should be noted that: in the device provided in the above embodiment, when implementing the functions thereof, only the division of the above functional modules is used as an example, in practical application, the above functional allocation may be implemented by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to implement all or part of the functions described above. In addition, the embodiments of the apparatus and the method provided in the foregoing embodiments belong to the same concept, and specific implementation processes of the embodiments of the method are detailed in the method embodiments, which are not repeated herein.
The application also discloses electronic equipment. Referring to fig. 3, fig. 3 is a schematic structural diagram of an electronic device according to the disclosure in an embodiment of the present application. The electronic device 300 may include: at least one processor 301, a memory 302, a user interface 303, at least one network interface 304, at least one communication bus 305.
Wherein a communication bus 305 is used to enable connected communications between these components.
The user interface 303 may include a Display screen (Display), a Camera (Camera), and the optional user interface 303 may further include a standard wired interface, and a wireless interface.
The network interface 304 may include a standard wired interface, a wireless interface (e.g., WI-FI interface), among others.
Wherein the processor 301 may include one or more processing cores. The processor 301 connects various parts of the electronic device through various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing the instructions, programs, code sets or instruction sets stored in the memory 302 and invoking the data stored in the memory 302. Alternatively, the processor 301 may be implemented in hardware in at least one of digital signal processing (Digital Signal Processing, DSP), field programmable gate array (Field-Programmable Gate Array, FPGA) and programmable logic array (Programmable Logic Array, PLA). The processor 301 may integrate one or a combination of a central processing unit (Central Processing Unit, CPU), a graphics processing unit (Graphics Processing Unit, GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs and the like; the GPU is used for rendering and drawing the content to be displayed on the display screen; and the modem is used to handle wireless communication. It will be appreciated that the modem may also not be integrated into the processor 301 and may instead be implemented by a single chip.
The Memory 302 may include a random access Memory (Random Access Memory, RAM) or a Read-Only Memory (Read-Only Memory). Optionally, the memory 302 includes a non-transitory computer readable medium (non-transitory computer-readable storage medium). Memory 302 may be used to store instructions, programs, code, sets of codes, or sets of instructions. The memory 302 may include a stored program area and a stored data area, wherein the stored program area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the above-described various method embodiments, etc.; the storage data area may store data or the like involved in the above respective method embodiments. The memory 302 may also be at least one memory device located remotely from the aforementioned processor 301. Referring to FIG. 3, an operating system, network communication modules, user interface modules, and edge computing based task data processing applications can be included in memory 302, which is a type of computer storage medium.
In the electronic device 300 shown in fig. 3, the user interface 303 is mainly used to provide an input interface for the user and to acquire the data input by the user, while the processor 301 may be configured to invoke the edge-computing-based task data processing application stored in the memory 302, which, when executed by the one or more processors 301, causes the electronic device 300 to perform the method as described in one or more of the above embodiments. It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of action combinations, but those skilled in the art should understand that the present application is not limited by the order of actions described, as some steps may be performed in another order or simultaneously in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the actions and modules involved are not necessarily required by the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
In the several embodiments provided herein, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, such as a division of units, merely a division of logic functions, and there may be additional divisions in actual implementation, such as multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some service interface, device or unit indirect coupling or communication connection, electrical or otherwise.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable memory. Based on such understanding, the technical solution of the present application may be essentially or a part contributing to the prior art or all or part of the technical solution may be embodied in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a solid state disk, or a network device, etc.) to perform all or part of the steps of the methods of the embodiments of the present application. And the aforementioned memory includes: various media capable of storing program codes, such as a U disk, a mobile hard disk, a magnetic disk or an optical disk.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, produces a flow or function in accordance with embodiments of the present application, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in or transmitted from one computer-readable storage medium to another, for example, by wired (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless (e.g., infrared, wireless, microwave, etc.) means from one website, computer, solid state disk, or data center. The computer readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a solid state disk, a data center, or the like, that contains one or more integration of available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk), etc.
Those of ordinary skill in the art will appreciate that all or part of the flows of the above-described method embodiments may be implemented by a computer program instructing related hardware. The program may be stored in a computer-readable storage medium and, when executed, may include the flows of the above-described method embodiments. The aforementioned storage medium includes a ROM, a random access memory (RAM), a magnetic disk, an optical disk, or the like.
The above embodiments are merely intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents, and that such modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A method for processing task data based on edge computing, the method comprising:
obtaining task data operated on by a user terminal, wherein the task data comprises first task data and second task data;
judging whether a first data processing difficulty of the first task data is greater than a preset processing difficulty, and judging whether a second data processing difficulty of the second task data is smaller than or equal to the preset processing difficulty; and
if the first data processing difficulty is greater than the preset processing difficulty, sending the first task data to a cloud server for processing; and if the second data processing difficulty is smaller than or equal to the preset processing difficulty, sending the second task data to an edge computing server for processing.
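For illustration only (not part of the claims): the routing step of claim 1 can be sketched as a simple threshold check. A minimal Python sketch follows, assuming a numeric difficulty score and a hypothetical threshold; all names and values are illustrative, not taken from the patent.

from dataclasses import dataclass

@dataclass
class TaskData:
    task_id: str
    processing_difficulty: float  # estimated difficulty of processing this task

PRESET_PROCESSING_DIFFICULTY = 0.7  # hypothetical preset threshold

def route_task(task: TaskData) -> str:
    # Hard tasks go to the cloud server; tasks at or below the threshold go to an edge computing server.
    if task.processing_difficulty > PRESET_PROCESSING_DIFFICULTY:
        return "cloud_server"
    return "edge_computing_server"

# Example: first task data (hard) and second task data (easy) from one user terminal.
print(route_task(TaskData("first", 0.9)))   # -> cloud_server
print(route_task(TaskData("second", 0.3)))  # -> edge_computing_server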
2. The method of claim 1, wherein the edge computing server has a lower data processing capacity than the cloud server, and wherein the edge computing server has a higher transmission efficiency than the cloud server.
3. The method of claim 1, wherein before sending the second task data to an edge computing server for processing if the second data processing difficulty is smaller than or equal to the preset processing difficulty, the method further comprises:
acquiring a plurality of edge computing servers in a preset range of the user terminal, wherein the plurality of edge computing servers comprise a first edge computing server and a second edge computing server, and the first edge computing server and the second edge computing server are any two different edge computing servers in the plurality of edge computing servers;
judging whether a first transmission efficiency of the first edge computing server is greater than a second transmission efficiency of the second edge computing server;
and if the first transmission efficiency of the first edge computing server is greater than the second transmission efficiency of the second edge computing server, selecting the first edge computing server as a main edge computing server for processing the task data.
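For illustration only (not part of the claims): the pairwise comparison of transmission efficiencies in claim 3 reduces to keeping the edge computing server with the highest measured efficiency. A minimal sketch, assuming efficiency is already available as a number where higher is better; names and values are hypothetical.

from dataclasses import dataclass

@dataclass
class EdgeServer:
    name: str
    transmission_efficiency: float  # e.g. measured throughput; higher is better

def select_main_edge_server(servers_in_range: list[EdgeServer]) -> EdgeServer:
    # Comparing every pair and keeping the winner is equivalent to taking the maximum.
    return max(servers_in_range, key=lambda s: s.transmission_efficiency)

servers = [EdgeServer("edge-1", 120.0), EdgeServer("edge-2", 95.0)]
print(select_main_edge_server(servers).name)  # -> edge-1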
4. The method according to claim 3, wherein if the first transmission efficiency of the first edge computing server is greater than the second transmission efficiency of the second edge computing server, selecting the first edge computing server as the main edge computing server for processing the task data specifically comprises:
transmitting third task data to the plurality of edge computing servers through the user terminal;
acquiring a plurality of processing times required for the third task data at the plurality of edge computing servers, wherein each processing time is the sum of the time required by the corresponding edge computing server to receive the third task data, the time required to process the third task data, and the time required to return a processing result to the user terminal; and the plurality of processing times comprise a first processing time corresponding to the first edge computing server and a second processing time corresponding to the second edge computing server;
judging whether the first processing time is shorter than the second processing time;
and if the first processing time is shorter than the second processing time, selecting the first edge computing server as the main edge computing server for processing the task data.
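For illustration only (not part of the claims): claim 4 measures each server's end-to-end processing time for a probe task (time to receive it, process it, and return the result) and keeps the fastest server as the main edge computing server. A minimal sketch in which the probe callable stands in for whatever transport the deployment actually uses; all names are hypothetical.

import time
from typing import Callable

def measure_processing_time(send_probe: Callable[[bytes], bytes], third_task_data: bytes) -> float:
    # Wall-clock round trip: send the probe task and wait for the processing result.
    start = time.monotonic()
    send_probe(third_task_data)
    return time.monotonic() - start

def pick_main_edge_server(probes: dict[str, Callable[[bytes], bytes]], third_task_data: bytes) -> str:
    times = {name: measure_processing_time(fn, third_task_data) for name, fn in probes.items()}
    return min(times, key=times.get)  # the server with the shortest processing time

# Fake probes standing in for two edge computing servers.
def fake_edge_1(data: bytes) -> bytes:
    time.sleep(0.01)
    return data

def fake_edge_2(data: bytes) -> bytes:
    time.sleep(0.03)
    return data

print(pick_main_edge_server({"edge-1": fake_edge_1, "edge-2": fake_edge_2}, b"probe"))  # -> edge-1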
5. The method of claim 3, wherein after selecting the first edge computing server as the main edge computing server for processing the task data if the first transmission efficiency of the first edge computing server is greater than the second transmission efficiency of the second edge computing server, the method further comprises:
acquiring a plurality of running states of the plurality of edge computing servers at intervals of a preset time period, wherein the plurality of running states comprise a first running state corresponding to the first edge computing server and a second running state corresponding to the second edge computing server;
judging whether the first running state has an operation fault, wherein the operation fault comprises a firmware fault, a storage fault, and a network fault;
if the first running state has the operation fault, judging whether the second running state has the operation fault;
and if the second running state does not have the operation fault, switching the main edge computing server for the task data to the second edge computing server.
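For illustration only (not part of the claims): the failover logic of claim 5 can be sketched as a periodic health check that switches the main edge computing server when it reports a firmware, storage, or network fault and the standby server is healthy. The polling interval and all names are hypothetical.

from dataclasses import dataclass

@dataclass
class RunningState:
    firmware_fault: bool = False
    storage_fault: bool = False
    network_fault: bool = False

    def has_operation_fault(self) -> bool:
        return self.firmware_fault or self.storage_fault or self.network_fault

def maybe_switch_main_server(main: str, standby: str, states: dict[str, RunningState]) -> str:
    # Switch only when the main server is faulty and the standby server is not.
    if states[main].has_operation_fault() and not states[standby].has_operation_fault():
        return standby
    return main

states = {"edge-1": RunningState(network_fault=True), "edge-2": RunningState()}
print(maybe_switch_main_server("edge-1", "edge-2", states))  # -> edge-2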
6. The method of claim 1, wherein after sending the first task data to the cloud server for processing if the first data processing difficulty is greater than the preset processing difficulty, and sending the second task data to the edge computing server for processing if the second data processing difficulty is smaller than or equal to the preset processing difficulty, the method further comprises:
obtaining a game experience influence level of the second task data;
judging whether the game experience influence level is greater than a preset influence level;
and if the game experience influence level is greater than the preset influence level, sending the second task data to the cloud server for processing.
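For illustration only (not part of the claims): claim 6 escalates edge-bound task data back to the cloud when its game experience influence level exceeds a preset level. A minimal sketch with hypothetical numeric levels.

PRESET_INFLUENCE_LEVEL = 2  # hypothetical preset influence level

def destination_for_second_task(game_experience_influence_level: int) -> str:
    # Second task data is normally processed on the edge, but tasks that matter
    # more to the game experience than the preset level are sent to the cloud instead.
    if game_experience_influence_level > PRESET_INFLUENCE_LEVEL:
        return "cloud_server"
    return "edge_computing_server"

print(destination_for_second_task(3))  # e.g. a main-quest NPC task -> cloud_server
print(destination_for_second_task(1))  # e.g. a minor animation task -> edge_computing_server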
7. The method of claim 6, wherein:
the data processing difficulty comprises action data, animation data, and dynamic adjustment data; and
the game experience influence level comprises a mission importance level, an NPC importance level, and a scenario level.
8. A task data processing device based on edge computing, wherein the device is a server, the server comprising an acquisition module (21), a judgment module (22), and a processing module (23), wherein:
the acquisition module (21) is configured to obtain task data operated on by a user terminal, wherein the task data comprises first task data and second task data;
the judgment module (22) is configured to judge whether a first data processing difficulty of the first task data is greater than a preset processing difficulty, and to judge whether a second data processing difficulty of the second task data is smaller than or equal to the preset processing difficulty; and
the processing module (23) is configured to send the first task data to a cloud server for processing if the first data processing difficulty is greater than the preset processing difficulty, and to send the second task data to an edge computing server for processing if the second data processing difficulty is smaller than or equal to the preset processing difficulty.
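For illustration only (not part of the claims): one possible code layout mirroring the device of claim 8, with one class per module (acquisition, judgment, processing). Class and method names are hypothetical and not taken from the patent.

class AcquisitionModule:
    # Module (21): obtains task data operated on by the user terminal.
    def acquire(self, user_terminal) -> list[dict]:
        return user_terminal.pending_task_data()

class JudgmentModule:
    # Module (22): compares each task's difficulty against the preset processing difficulty.
    def __init__(self, preset_difficulty: float):
        self.preset_difficulty = preset_difficulty

    def is_hard(self, task: dict) -> bool:
        return task["difficulty"] > self.preset_difficulty

class ProcessingModule:
    # Module (23): dispatches each task to the cloud server or an edge computing server.
    def dispatch(self, task: dict, hard: bool) -> str:
        return "cloud_server" if hard else "edge_computing_server"

class TaskDataProcessingDevice:
    # Server-side device wiring the three modules together.
    def __init__(self, preset_difficulty: float = 0.7):  # hypothetical threshold
        self.acquisition = AcquisitionModule()
        self.judgment = JudgmentModule(preset_difficulty)
        self.processing = ProcessingModule()

    def handle(self, user_terminal) -> list[str]:
        tasks = self.acquisition.acquire(user_terminal)
        return [self.processing.dispatch(t, self.judgment.is_hard(t)) for t in tasks]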
9. An electronic device (300), comprising a processor (301), a memory (302), a user interface (303), a network interface (304), and a communication bus (305), wherein the memory (302) is configured to store instructions, the user interface (303) and the network interface (304) are configured to communicate with other devices, and the processor (301) is configured to execute the instructions stored in the memory (302) to cause the electronic device (300) to perform the method according to any one of claims 1 to 7.
10. A computer-readable storage medium storing instructions which, when executed, cause the method of any one of claims 1 to 7 to be performed.
CN202311392596.XA 2023-10-25 2023-10-25 Task data processing method and device based on edge calculation and electronic equipment Pending CN117453396A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311392596.XA CN117453396A (en) 2023-10-25 2023-10-25 Task data processing method and device based on edge calculation and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311392596.XA CN117453396A (en) 2023-10-25 2023-10-25 Task data processing method and device based on edge calculation and electronic equipment

Publications (1)

Publication Number Publication Date
CN117453396A true CN117453396A (en) 2024-01-26

Family

ID=89584775

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311392596.XA Pending CN117453396A (en) 2023-10-25 2023-10-25 Task data processing method and device based on edge calculation and electronic equipment

Country Status (1)

Country Link
CN (1) CN117453396A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117857555A (en) * 2024-03-05 2024-04-09 浙江万雾信息科技有限公司 Data sharing method and system based on edge calculation
CN117857555B (en) * 2024-03-05 2024-05-14 浙江万雾信息科技有限公司 Data sharing method and system based on edge calculation

Similar Documents

Publication Publication Date Title
CA2814420C (en) Load balancing between general purpose processors and graphics processors
WO2022222755A1 (en) Service processing method and apparatus, and storage medium
US9237115B2 (en) Load balancing in cloud-based game system
CN110333947B (en) Method, device, equipment and medium for loading subcontracting resources of game application
CN109889576B (en) Mobile cloud game resource optimization method based on game theory
CN113434300B (en) Data processing method and related device
CN108499100B (en) Cloud game error recovery method and system based on edge calculation
JP2021514754A (en) Awarding incentives to players to participate in competitive gameplay
US20220226736A1 (en) Selection of virtual server for smart cloud gaming application from multiple cloud providers based on user parameters
CN117453396A (en) Task data processing method and device based on edge calculation and electronic equipment
CN107920108A (en) A kind of method for pushing of media resource, client and server
WO2023107283A1 (en) Network storage game allocation based on artificial intelligence
CN110147277A (en) A kind of resource dynamic deployment method, device, server and storage medium
CN111249747B (en) Information processing method and device in game
CN111148278A (en) Data transmission method, device, storage medium and electronic equipment
US20150106497A1 (en) Communication destination determination apparatus, communication destination determination method, communication destination determination program, and game system
CN113098763B (en) Instant communication message sending method, device, storage medium and equipment
CN111930724B (en) Data migration method and device, storage medium and electronic equipment
CN112604267B (en) Game processing method, system, device, equipment and medium
CN110251943B (en) Game player matching method, device, equipment and storage medium
CN113992572B (en) Routing method, device and storage medium for shared storage resource path in heterogeneous network
US20230199062A1 (en) Data center wide network storage load balancing
US11465045B1 (en) Maintaining session state using redundant servers
CN114679596B (en) Interaction method and device based on game live broadcast, electronic equipment and storage medium
US11167212B1 (en) Maintaining session state using redundant servers

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination