CN116860436A - Thread data processing method, device, equipment and storage medium - Google Patents

Thread data processing method, device, equipment and storage medium

Info

Publication number
CN116860436A
CN116860436A (Application CN202310714913.9A)
Authority
CN
China
Prior art keywords
thread
data
current
threads
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310714913.9A
Other languages
Chinese (zh)
Inventor
张志运 (Zhang Zhiyun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Zhizhu Daxun Communication Co ltd
Original Assignee
Chongqing Zhizhu Daxun Communication Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Zhizhu Daxun Communication Co ltd
Priority to CN202310714913.9A
Publication of CN116860436A

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5083: Techniques for rebalancing the load in a distributed system
    • G06F 9/48: Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806: Task transfer initiation or dispatching
    • G06F 9/52: Program synchronisation; Mutual exclusion, e.g. by means of semaphores
    • G06F 9/526: Mutual exclusion algorithms
    • G06F 9/54: Interprogram communication
    • G06F 9/546: Message passing systems or structures, e.g. queues
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multi Processors (AREA)

Abstract

The application belongs to the technical field of computers, and particularly relates to a thread data processing method, device, equipment and storage medium. The method comprises the following steps: when the first thread is run and obtains data to be processed for the original second thread, determining a target processing thread from a plurality of current second threads, the target processing thread being a thread meeting a preset processing requirement, and the plurality of current second threads being threads created based on the original second thread when the original second thread is overloaded; and sending a target processing instruction to the first thread, the target processing instruction instructing the first thread to send the data to be processed of the original second thread to the target processing thread, so that the target processing thread is run to process the data to be processed. According to the application, a plurality of new threads are created from the original thread when that thread is overloaded, and communication between the new threads and the upstream and downstream threads is scheduled through a scheduling module, so that load-balanced scheduling of threads is achieved while repeated modification of the code is avoided.

Description

Thread data processing method, device, equipment and storage medium
Technical Field
The application belongs to the technical field of computers, and particularly relates to a thread data processing method, a thread data processing device, thread data processing equipment and a storage medium.
Background
With the rapid development of information technology, user volume and data volume have grown sharply in recent years, placing heavy demands on the data processing and load balancing capabilities of computer systems. In applications of the embedded Linux system, a multi-core multithreaded system is generally used for processing data or messages and for communication transmission. The execution of each thread on the multiple cores can be scheduled by the system, or a thread can be assigned by the designer to run on a specified core. However, when a thread running on a single core is overloaded, neither designating the core nor relying on system scheduling allows the multi-core multithreaded system to resolve the thread overload problem.
The existing method for solving the overload problem in a multi-core multithreaded system mainly modifies the code so as to cut the overloaded thread into two sub-threads that stand in an upstream-downstream relationship, and assigns the two split sub-threads to two different cores to run. However, this method is relatively complex, and it does not completely solve the thread overload problem: if the system encounters overload again after a successful modification, the code must be modified a second, third or even more times, which consumes a great deal of time.
Disclosure of Invention
In order to solve the above technical problems, the application provides a thread data processing method, device, equipment and storage medium. According to the application, a plurality of new threads are created from the original thread when that thread is overloaded, and communication between the new threads and the upstream and downstream threads is scheduled through a scheduling module, so that load-balanced scheduling of threads is achieved while repeated modification of the code is avoided.
In one aspect, the present application provides a thread data processing method, applied to a scheduling module, the method comprising:
when the first thread is run and obtains data to be processed for the original second thread, determining a target processing thread from a plurality of current second threads; the target processing thread is a thread meeting a preset processing requirement; the plurality of current second threads are threads created based on the original second thread when the original second thread is overloaded; and the processing capacity of each current second thread is consistent with that of the original second thread;
and sending a target processing instruction to the first thread, wherein the target processing instruction instructs the first thread to send the data to be processed of the original second thread to the target processing thread, so that the target processing thread is operated to process the data to be processed.
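As an illustrative sketch only (not the patent's implementation), the two steps above can be modeled with per-thread queues standing in for inter-thread messages; the class name, the use of queue backlog as the "preset processing requirement", and the return value of `dispatch` are all assumptions made for the example.

```python
import queue


class Scheduler:
    """Hypothetical sketch of the scheduling module: it picks a target
    among the current second threads and routes the first thread's
    pending data there. Queue backlog stands in for thread load."""

    def __init__(self, worker_queues):
        # One input queue per current second thread.
        self.worker_queues = worker_queues

    def pick_target(self):
        # Assumed "preset processing requirement": the smallest backlog.
        return min(range(len(self.worker_queues)),
                   key=lambda i: self.worker_queues[i].qsize())

    def dispatch(self, data):
        # Stand-in for the "target processing instruction": the data to
        # be processed lands in the chosen target thread's input queue.
        target = self.pick_target()
        self.worker_queues[target].put(data)
        return target
```

In a real system the scheduler would reply to the first thread with the target's identity rather than enqueue directly, but the selection logic is the same.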
Optionally, before determining the target processing thread from the plurality of current second threads, in a case where the first thread is executed to obtain the data to be processed of the original second thread, the method further includes: creating a plurality of current second threads based on the original second threads in case of overload of the original second threads; each current second thread and the original second thread have the same processing power.
Optionally, determining the target processing thread from the plurality of current second threads includes: determining the thread load of each current second thread; and determining a target load meeting the first preset load requirement from the thread load of each current second thread, and determining the current second thread corresponding to the target load as a target processing thread.
Optionally, determining the target processing thread from the plurality of current second threads includes: determining a data object of the data to be processed of the original second thread; based on the data object, a target processing thread is determined from the plurality of current second threads.
Optionally, determining, based on the data object, a target processing thread from the plurality of current second threads includes: when the data object meets a first preset object requirement, determining the thread load of each current second thread, determining a target load meeting a second preset load requirement from the thread loads, and determining the current second thread corresponding to the target load as the target processing thread; or, when the data object meets a second preset object requirement, determining the thread among the plurality of current second threads that has processed data of the data object as the target processing thread.
Optionally, the scheduling module is located in the first thread, and the method further includes: receiving a thread selection request sent by a third thread; the thread selection request is generated after the data to be processed of the third thread is processed by the third thread to obtain return data; the data to be processed of the third thread is obtained by processing the data to be processed of the original second thread by the target processing thread; determining a target return thread from a plurality of current second threads; the target return thread is a thread meeting preset return requirements; sending thread selection information to a third thread; the thread selection information includes information of the target return thread, the thread selection information being used to instruct the third thread to send return data to the target return thread.
Optionally, determining the target return thread from the plurality of current second threads includes: determining the thread load of each current second thread, determining a target load meeting a third preset load requirement from the thread loads, and determining the current second thread corresponding to the target load as the target return thread; or, determining the thread among the plurality of current second threads that has processed data of the data object as the target return thread, wherein the thread selection request includes information of the data object of the return data.
In another aspect, an embodiment of the present application provides a thread data processing apparatus, including:
the determining module is used for determining a target processing thread from a plurality of current second threads under the condition that the first thread is operated to obtain data to be processed of the original second thread; the target processing thread is a thread meeting the preset processing requirement; the plurality of current second threads are threads created based on the original second threads under the condition that the original second threads are overloaded; the processing capacity of each current second thread is consistent with that of the original second thread;
and the sending module is used for sending a target processing instruction to the first thread, and the target processing instruction instructs the first thread to send the data to be processed of the original second thread to the target processing thread so that the target processing thread is operated to process the data to be processed.
Optionally, the determining module is configured to: under the condition that the first thread is operated to obtain the data to be processed of the original second thread, before determining a target processing thread from a plurality of current second threads, under the condition that the original second thread is overloaded, creating a plurality of current second threads based on the original second thread; each current second thread and the original second thread have the same processing power.
Optionally, the determining module is configured to: determining the thread load of each current second thread in the process of determining the target processing thread from a plurality of current second threads; and determining a target load meeting the first preset load requirement from the thread load of each current second thread, and determining the current second thread corresponding to the target load as a target processing thread.
Optionally, the determining module is configured to: in the process of determining a target processing thread from a plurality of current second threads, determining a data object of data to be processed of an original second thread; based on the data object, a target processing thread is determined from the plurality of current second threads.
Optionally, the determining module is configured to: in the process of determining a target processing thread from a plurality of current second threads based on the data object, determine the thread load of each current second thread when the data object meets a first preset object requirement, determine a target load meeting a second preset load requirement from the thread loads, and determine the current second thread corresponding to the target load as the target processing thread; or, when the data object meets a second preset object requirement, determine the thread among the plurality of current second threads that has processed data of the data object as the target processing thread.
Optionally, the scheduling module is located in the first thread, and the determining module is configured to: receiving a thread selection request sent by a third thread; the thread selection request is generated after the data to be processed of the third thread is processed by the third thread to obtain return data; the data to be processed of the third thread is obtained by processing the data to be processed of the original second thread by the target processing thread; determining a target return thread from a plurality of current second threads; the target return thread is a thread meeting preset return requirements; sending thread selection information to a third thread; the thread selection information includes information of the target return thread, the thread selection information being used to instruct the third thread to send return data to the target return thread.
Optionally, the determining module is configured to: in the process of determining a target return thread from the plurality of current second threads, determine the thread load of each current second thread, determine a target load meeting a third preset load requirement from the thread loads, and determine the current second thread corresponding to the target load as the target return thread; or, determine the thread among the plurality of current second threads that has processed data of the data object as the target return thread, wherein the thread selection request includes information of the data object of the return data.
In another aspect, the present application provides an electronic device for processing thread data, where the electronic device includes a processor and a memory, and at least one instruction or at least one program is stored in the memory, and the at least one instruction or at least one program is loaded and executed by the processor to implement a thread data processing method as described above.
In another aspect, the present application provides a computer readable storage medium having stored therein at least one instruction or at least one program loaded and executed by a processor to implement a thread data processing method as described above.
According to the thread data processing method, device, electronic equipment and storage medium, a scheduling module is added so that, when a thread is overloaded, the scheduling module can create a plurality of thread entities based on that thread and distribute them to different cores for processing, allowing different cores to share the thread's load. The scheduling module also schedules message traffic between the thread and its upstream and downstream threads: it creates a plurality of thread entities based on the thread whose load needs balancing, determines to which thread entity the upstream thread sends a message to be processed, and determines to which thread entity the downstream thread sends a return message, so that each message is distributed to a suitable thread entity for processing according to its specific characteristics, and load balancing among the plurality of thread entities is achieved. Furthermore, when a thread's read operations outnumber its write operations, the embodiment of the application replaces the mutual exclusion lock with a read-write lock; this avoids the extra load caused by the mutual exclusion lock and further reduces the thread load.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions of the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the application, and that a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic diagram illustrating an implementation environment for a thread data processing method, according to an example embodiment;
FIG. 2 is a flow diagram illustrating a method of thread data processing according to an example embodiment;
FIG. 3 is a flow diagram illustrating a method of thread data processing according to an example embodiment;
FIG. 4 is a flow diagram illustrating a method of thread data processing according to an example embodiment;
FIG. 5 is a flow diagram illustrating a method of thread data processing according to an example embodiment;
FIG. 6 is a flow diagram illustrating a method of thread data processing according to an example embodiment;
FIG. 7 is a flowchart illustrating a method of thread data processing according to an example embodiment;
FIG. 8 is a block diagram of a thread data processing apparatus according to an illustrative embodiment.
Fig. 9 is a hardware configuration block diagram of a server of a thread data processing method according to an exemplary embodiment.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It should be noted that the terms "first," "second," and the like in the description and the claims of the embodiments of the present application and the above-described drawings are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or server that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In order to make the objects, technical solutions and advantages of the embodiments of the present application more apparent, the embodiments of the present application will be further described in detail below with reference to the accompanying drawings and examples. It should be understood that the detailed description and specific examples, while indicating the embodiment of the application, are intended for purposes of illustration only and are not intended to limit the scope of the application.
The embedded Linux system is basically a multi-core multithreaded system, in which the running of each thread on the multiple cores can be scheduled by the system, or a thread can be assigned by the designer to run on one of the cores. However, when a single thread is overloaded, that is, when a single thread drives the load of a single core up to 100%, no other core can share the load of the overloaded core, whether the core was designated by the designer or chosen by the system scheduler, and the overload problem of that thread cannot be solved. When this problem is encountered, the existing method generally splits the thread into two threads having an upstream-downstream relationship. For example, when the thread's processing flow is "A-B-C-D", it may be split into "A-B" and "C-D", and the intra-thread function call between "B" and "C" is changed into inter-thread message passing. This method is troublesome; if overload occurs again after the first split, the thread has to be split yet again; furthermore, the load may also be unbalanced between the split threads.
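The conventional split described above can be sketched as follows: an in-thread call chain A-B-C-D is cut into an "A-B" thread and a "C-D" thread, with the former function call between B and C replaced by a message queue. This is an illustrative sketch; the step functions and their arithmetic are hypothetical.

```python
import queue
import threading

# Hypothetical processing steps of the original single thread.
def step_a(x): return x + 1
def step_b(x): return x * 2
def step_c(x): return x - 3
def step_d(x): return x * 10

link = queue.Queue()      # replaces the direct B -> C function call
results = queue.Queue()

def thread_ab(inputs):
    for x in inputs:
        link.put(step_b(step_a(x)))   # run A then B, then hand off
    link.put(None)                    # end-of-stream marker

def thread_cd():
    while (x := link.get()) is not None:
        results.put(step_d(step_c(x)))  # run C then D downstream

t1 = threading.Thread(target=thread_ab, args=([1, 2, 3],))
t2 = threading.Thread(target=thread_cd)
t1.start(); t2.start(); t1.join(); t2.join()
out = [results.get() for _ in range(3)]   # [10, 30, 50]
```

The drawback the text describes is visible here: the cut point between B and C is fixed in code, so a second overload would require editing the source again.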
Based on this, the embodiment of the application provides a thread data processing method, which is used for scheduling the threads needing to be balanced in load and the message communication between the upstream threads and the downstream threads by adding a scheduling module. The scheduling module creates a plurality of thread entities based on the threads needing to balance the load, determines which thread entity the upstream thread sends the message to be processed to process, and determines which thread entity the downstream thread sends the return message to, so that the message is distributed to the proper thread entity to process according to the specific condition of the message to be processed, and load balancing among the plurality of thread entities is realized.
FIG. 1 is a schematic diagram illustrating an implementation environment for a thread data processing method, according to an example embodiment. As shown in fig. 1, the implementation environment may include at least a client 01 and a server 02, where the client 01 and the server 02 may be directly or indirectly connected through a wired or wireless communication manner, and the present application is not limited herein.
In particular, the client 01 may be used to receive input of text to be recognized, to present search results, etc. Alternatively, the client 01 may include a smart phone, a desktop computer, a tablet computer, a notebook computer, a digital assistant, an Augmented Reality (AR)/Virtual Reality (VR) device, a smart voice interaction device, a smart home appliance, a smart wearable device, a vehicle-mounted terminal device, or other type of physical device, and may also include software running in the physical device, such as an application program, or the like.
Specifically, the server 02 may be configured to process the returned data, including encoding and vectorizing each piece of feature information and then performing feature intersection and recommendation prediction, so as to obtain prediction recommendation results for multiple business objectives. The server 02 may also be configured to train a thread data processing model and to send recommendation instructions to a recommendation system corresponding to the target object, where the recommendation instructions can be used to instruct the recommendation system to make an information recommendation. Optionally, the server 02 may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server that provides basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (Content Delivery Network, CDN), big data and artificial intelligence platforms. Specifically, the server may include a physical device, which may include a network communication sub-module, a processor, a memory and the like, and may also include software running in the physical device, such as an application program.
It should be noted that fig. 1 is only an example. In other scenarios, other implementation environments may also be included.
FIG. 2 is a flow diagram illustrating a method of thread data processing according to an example embodiment. The method may be used in the implementation environment of fig. 1. The present specification provides the operational steps of the thread data processing method as described above, for example, in an embodiment or flowchart, but may include more or fewer operational steps based on conventional or non-inventive labor. The order of steps recited in the embodiments is merely one way of performing the order of steps and does not represent a unique order of execution. When implemented in a real system or server product, the methods illustrated in the embodiments or figures may be performed sequentially or in parallel (e.g., in a parallel processor or multithreaded environment). As shown in fig. 2, the method may include:
s101: and determining a target processing thread from the plurality of current second threads under the condition that the first thread is operated to obtain the data to be processed of the original second thread.
In step S101, when the first thread is run to process the to-be-processed data of the first thread and thereby obtains the to-be-processed data of the original second thread, the scheduling module may determine the target processing thread from a plurality of current second threads.
In an alternative embodiment, the plurality of current second threads are threads created based on the original second thread in the event that the original second thread is overloaded; wherein the processing power of each current second thread is consistent with that of the original second thread.
Optionally, before step S101, the method may further include: in the event that the original second thread is overloaded, a plurality of current second threads are created based on the original second thread. Alternatively, the scheduling module may create a plurality of current second threads based on the original second threads, while each current second thread is designated to run on a different core. Wherein the codes of the plurality of current second threads and the codes of the original second threads may be the same codes.
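As a sketch of how such clone threads might be created, the helper below (all names are assumptions, not the patent's code) starts n workers running the same code, each with its own input queue. Pinning each clone to a different core uses os.sched_setaffinity, which exists only on Linux, so the call is guarded and optional.

```python
import os
import queue
import threading

def make_current_second_threads(worker_fn, n, ncores=None):
    """Create n clone threads running the same worker code, each with
    its own input queue. Illustrative sketch: names and the pinning
    policy (round-robin over ncores) are assumptions."""
    inboxes = [queue.Queue() for _ in range(n)]
    threads = []
    for i, inbox in enumerate(inboxes):
        def run(inbox=inbox, i=i):
            # Linux-only: pin this OS thread to one core, if requested.
            if ncores and hasattr(os, "sched_setaffinity"):
                os.sched_setaffinity(0, {i % ncores})
            worker_fn(inbox)
        t = threading.Thread(target=run, daemon=True)
        t.start()
        threads.append(t)
    return inboxes, threads
```

Because every clone runs the same `worker_fn`, the "same code, same processing capacity" property described in the text holds by construction.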
It should be noted that, in the embodiment of the present application, the plurality of current second threads refers to all current second threads created when overloaded.
In an alternative embodiment, the target processing thread may be a thread that meets a preset processing requirement among the plurality of current second threads. Alternatively, the preset processing requirement may be that the load of the current second thread meets the preset load requirement.
In the embodiment of the application, the scheduling module needs to determine the current second thread meeting the condition from the plurality of current second threads, and distributes the data to be processed of the original second thread to the current second thread meeting the condition for processing so as to avoid overload condition of the plurality of current second threads.
Alternatively, the scheduling module may be located in the first thread, the second thread, or the third thread. In other alternative embodiments, the scheduling module may be located at other positions in the system, which is not specified in the embodiment of the present application.
In the embodiment of the application, since the scheduling module creates a plurality of current second threads, the global variables in the second threads must be protected. Optionally, a mutex lock or a read-write lock may be used, and the range of code protected by the lock should be kept as small as possible, thereby reducing the performance degradation caused by the lock. Optionally, when the code reads a global variable more often than it writes it, a read-write lock may be used, which reduces the suspension of read operations caused by the lock and thereby improves the processing efficiency of the second thread.
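The read-mostly trade-off can be illustrated with a minimal readers-writer lock. Python's standard library has no rwlock, so the following is a bare-bones sketch for illustration (writers exclusive, readers shared), not the patent's implementation.

```python
import threading

class ReadWriteLock:
    """Minimal readers-writer lock: any number of readers may hold the
    lock together, while a writer excludes both readers and other
    writers. Illustrative sketch only."""

    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0

    def acquire_read(self):
        # Readers take the internal lock only briefly to bump the count,
        # so concurrent reads do not block one another.
        with self._cond:
            self._readers += 1

    def release_read(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()   # wake any waiting writer

    def acquire_write(self):
        # The writer holds the internal lock for its whole critical
        # section and waits until all readers have drained.
        self._cond.acquire()
        while self._readers > 0:
            self._cond.wait()

    def release_write(self):
        self._cond.release()
```

Compared with a plain mutex, readers never wait for each other here, which is exactly the gain the text attributes to using a read-write lock when reads outnumber writes.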
A specific embodiment of step S101 is further described below based on fig. 3. As one example, the target processing thread may be determined from the plurality of current second threads based on a first preset load requirement. Referring to fig. 3, fig. 3 is a flowchart illustrating a thread data processing method according to an exemplary embodiment. As illustrated in fig. 3, when the first thread is run and obtains the data to be processed of the original second thread, determining a target processing thread from a plurality of current second threads includes:
S1011: a thread load for each current second thread is determined.
In step S1011, the scheduling module may determine the thread load of each current second thread. In an alternative embodiment, the thread load or the thread load rate of each current second thread may be determined, where the thread load rate is calculated based on the actual load and the rated load. The actual load or the actual load rate may be the load information of the core corresponding to the current second thread.
S1013: and determining a target load meeting the first preset load requirement from the thread load of each current second thread, and determining the current second thread corresponding to the target load as a target processing thread.
In step S1013, the scheduling module may determine a target load that meets the first preset load requirement from the thread loads of each current second thread, and determine the current second thread corresponding to the target load as the target processing thread.
Alternatively, the first preset load requirement may be that the thread load is less than the first preset load value. Alternatively, the first preset load requirement may be that the thread load rate is less than the first preset load value. Optionally, the first preset load requirement may also be that the thread load is the lowest thread load among the thread loads corresponding to the plurality of current second threads.
In an alternative embodiment, if the data to be processed is data to be processed in parallel, steps S1011-S1013 may be performed to allocate the data to be processed to the appropriate current second thread for processing, so as to avoid overloading the current second thread.
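Steps S1011 and S1013 can be sketched as a simple selection over per-thread load values. The function names, the load-rate representation (actual load divided by rated load), and the 0.8 threshold are illustrative assumptions, not part of the claimed method:

```python
def pick_target_thread(thread_loads):
    """One reading of the first preset load requirement: choose the
    current second thread with the lowest thread load rate.
    thread_loads maps a thread identifier to its load rate."""
    return min(thread_loads, key=thread_loads.get)

def pick_target_thread_below(thread_loads, preset=0.8):
    """Stricter variant: the lowest load rate must also fall below a
    preset value (0.8 is a hypothetical threshold); returns None when
    every current second thread is too loaded."""
    tid = min(thread_loads, key=thread_loads.get)
    return tid if thread_loads[tid] < preset else None
```

For example, with loads `{"t1": 0.9, "t2": 0.3, "t3": 0.5}` both variants select `t2`, while the stricter variant reports saturation (returns `None`) when no thread is below the preset value.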
A specific embodiment of step S101 is further described below based on fig. 4. As one example, the target processing thread may be determined from the plurality of current second threads based on a data object of the data to be processed. Referring to fig. 4, fig. 4 is a flowchart of a thread data processing method according to an exemplary embodiment. As illustrated in fig. 4, in a case where the first thread is executed to obtain data to be processed of the original second thread, determining a target processing thread from a plurality of current second threads includes:
S1012: A data object of the data to be processed of the original second thread is determined.
In step S1012, a data object of the data to be processed of the original second thread may be determined.
S1014: based on the data object, a target processing thread is determined from the plurality of current second threads.
In step S1014, a current second thread corresponding to the data object of the plurality of current second threads may be determined as the target processing thread based on the data object.
Alternatively, there may be a correspondence between the data objects and the current second threads. For example, each data object may correspond to one current second thread, and each current second thread may correspond to one or more data objects. In this embodiment, the scheduling module may determine the target processing thread from the plurality of current second threads based on the correspondence between the data objects and the current second threads.
In an alternative embodiment, if the data to be processed is data requiring serial processing, steps S1012-S1014 may be performed to assign all messages or data of one object or one user to the specified current second thread.
As an example, in a case where the data object of the data to be processed has a corresponding current second thread, the data to be processed may be allocated to that current second thread, ensuring that it is processed by the thread that is processing or has processed the object or user; in a case where the data object has no corresponding current second thread, that is, the data object is a new object, the data may be allocated according to thread load. The implementation of step S1014 is further described below based on fig. 5; fig. 5 is a flow chart illustrating a thread data processing method according to an exemplary embodiment. As illustrated in fig. 5, determining a target processing thread from the plurality of current second threads based on the data object may include steps S10141-S10142 or step S10143:
S10141: and under the condition that the data object meets the first preset object requirement, determining the thread load of each current second thread.
In step S10141, the thread load of each current second thread may be determined if the data object meets the first preset object requirement. Optionally, the first preset object requirement may include: the data object is an unprocessed data object or a data object which has no corresponding relation with the current second thread.
Alternatively, the thread load or the thread load rate of each current second thread may be determined, where the thread load rate is calculated based on the actual load and the rated load. The actual load or the actual load rate may be load information of the core corresponding to the current second thread.
S10142: and determining a target load meeting a second preset load requirement from the thread load of each current second thread, and determining the current second thread corresponding to the target load as a target processing thread.
In step S10142, a target load that meets the second preset load requirement may be determined from the thread load of each current second thread, and the current second thread corresponding to the target load is determined as the target processing thread. Alternatively, the second preset load requirement may be that the thread load is less than the second preset load value. Alternatively, the second preset load requirement may be that the thread load rate is less than the second preset load value. Optionally, the second preset load requirement may also be that the thread load is the lowest thread load among the thread loads corresponding to the plurality of current second threads.
In the embodiment of the application, in the case that the data object has not been processed or has no corresponding relation with any thread, the scheduling module can allocate the data to be processed according to the load condition, so that the data to be processed is allocated to a core with a low load for processing, thereby avoiding overload.
S10143: and under the condition that the data object meets the second preset object requirement, determining the thread which processes the data of the data object in the plurality of current second threads as a target processing thread.
In step S10143, in a case where the data object satisfies the second preset object requirement, a thread of the plurality of current second threads, which has processed the data of the data object, may be determined as the target processing thread. Optionally, the second preset object requirement may include: the data object is a processed data object or a data object with a corresponding relation with the current second thread.
In the embodiment of the application, in the case that the data object has been processed or a thread corresponding to the object exists, the scheduling module can allocate the data to be processed according to the data object, so that the data to be processed is allocated to the corresponding core for processing, thereby ensuring the normal processing of serial data.
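The branching of steps S10141-S10143 can be sketched as a single dispatch function: reuse the owning thread when a correspondence exists, otherwise pick by load and record the new correspondence. The names and data structures here are hypothetical:

```python
def dispatch(data_object, object_to_thread, thread_loads):
    """Route one message to a current second thread.

    If the data object already has an owning thread (the second preset
    object requirement), that thread is reused so serial order is
    preserved; otherwise (the first preset object requirement) the
    least loaded thread is chosen and the new correspondence recorded.
    """
    if data_object in object_to_thread:
        return object_to_thread[data_object]       # processed before: same thread
    tid = min(thread_loads, key=thread_loads.get)  # new object: lowest load
    object_to_thread[data_object] = tid
    return tid
```

Once a new object has been assigned, later messages for it follow the recorded correspondence rather than the load rule, which is what keeps per-object processing serial.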
S102: a target processing instruction is sent to the first thread.
In step S102, the scheduling module may send a target processing instruction to the first thread. The target processing instruction may be configured to instruct the first thread to send the data to be processed of the original second thread to the target processing thread, so that the target processing thread is executed to process the data to be processed of the original second thread.
In the embodiments of the present application, a scheduling module is added to schedule the threads whose load needs to be balanced and the message communication between upstream and downstream threads. The scheduling module creates a plurality of thread entities based on the thread whose load needs to be balanced, and determines to which thread entity the upstream thread sends the message to be processed, so that the message is distributed to a suitable thread entity for processing according to the specific condition of the message to be processed, thereby realizing load balancing among the plurality of thread entities.
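The creation of a plurality of thread entities, each consuming from its own message queue, might be sketched as follows; the pool shape, the sentinel-based shutdown, and all names are illustrative assumptions rather than the patent's concrete implementation:

```python
import queue
import threading

def make_worker_pool(handler, n):
    """Create n 'current second thread' entities, each consuming from
    its own input queue; handler stands in for the original second
    thread's processing logic. A None item is a shutdown sentinel."""
    queues, threads = [], []
    for _ in range(n):
        q = queue.Queue()

        def worker(q=q):
            while True:
                item = q.get()
                if item is None:      # sentinel: stop this entity
                    break
                handler(item)

        t = threading.Thread(target=worker)
        t.start()
        queues.append(q)
        threads.append(t)
    return queues, threads
```

A scheduling module would then decide which queue receives each message according to its load-based or object-based rules, which is the core of the load balancing described above.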
A thread data processing method according to an embodiment of the present application is further described below based on fig. 6. As one example, the scheduling module may be located within the first thread, and the first thread may provide a thread entity selection interface to the third thread. In this embodiment, the scheduling module may control the first thread to send the data to be processed to the specified current second thread; after the current second thread processes the data to obtain the data to be processed by the third thread and sends it to the third thread, the third thread may query the scheduling module for a target return thread among the plurality of current second threads and send the return data to that target return thread. Referring to fig. 6, fig. 6 is a flow chart illustrating a method of processing thread data according to an exemplary embodiment. As illustrated in fig. 6, a thread data processing method according to an embodiment of the present application includes:
S201: and determining a target processing thread from the plurality of current second threads under the condition that the first thread is operated to obtain the data to be processed of the original second thread.
The content of step S201 may refer to the explanation of step S101, which is not repeated here.
S202: a target processing instruction is sent to the first thread.
The content of step S202 may be referred to the above description of step S102, and will not be repeated here.
S203: and receiving a thread selection request sent by the third thread.
In step S203, the scheduling module may receive a thread selection request sent by the third thread through the interface. Optionally, the thread selection request is generated after the return data is obtained by processing the data to be processed of the third thread by the third thread, and the data to be processed of the third thread is obtained by processing the data to be processed of the original second thread by the target processing thread.
S204: a target return thread is determined from the plurality of current second threads.
In step S204, the scheduling module may determine a target return thread from the plurality of current second threads. The target return thread is a thread meeting preset return requirements. Alternatively, the preset return requirement may be that the load of the current second thread meets the preset load requirement.
A specific embodiment of step S204 is further described below based on fig. 7. FIG. 7 is a flow diagram illustrating a method of thread data processing according to an example embodiment. As illustrated in fig. 7, the exemplary flow of step S204 may include steps S2041-S2042 or step S2043:
S2041: a thread load for each current second thread is determined.
In step S2041, in the case where the data to be processed is data to be processed in parallel, the thread load of each current second thread may be determined. Alternatively, the thread load or the thread load rate of each current second thread may be determined, where the thread load rate is calculated based on the actual load and the rated load. The actual load or the actual load rate may be load information of the core corresponding to the current second thread.
S2042: and determining a target load meeting a third preset load requirement from the thread load of each current second thread, and determining the current second thread of the original second thread corresponding to the target load as a target return thread.
In step S2042, a target load satisfying the third preset load requirement may be determined from the thread load of each current second thread, and the current second thread corresponding to the target load may be determined as the target return thread. Alternatively, the third preset load requirement may be that the thread load is less than a third preset load value. Alternatively, the third preset load requirement may be that the thread load rate is less than the third preset load value. Optionally, the third preset load requirement may also be that the thread load is the lowest among the thread loads corresponding to the plurality of current second threads.
In the embodiment of the application, the scheduling module can distribute the returned data according to the load condition, so that the returned data is distributed to the cores with low load for receiving, and the overload condition is avoided.
S2043: a thread of the plurality of current second threads that processed data of the data object is determined to be a target return thread.
In this embodiment, the thread selection request in step S203 includes information of the data object of the return data.
In step S2043, in the case where the data to be processed is data to be processed in series, a thread that has processed the data of the data object among the plurality of current second threads may be determined as the target return thread.
In the embodiment of the application, in the case that the data object has been processed or a thread corresponding to the object exists, the scheduling module can allocate the return data according to the data object, so that the return data is allocated to the corresponding core for receiving, thereby ensuring the normal processing of serial data.
The following description is continued based on fig. 6.
S205: thread selection information is sent to the third thread.
In step S205, the scheduling module may send thread selection information to the third thread. The thread selection information comprises information of a target return thread, and the thread selection information is used for indicating a third thread to send return data to the target return thread.
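The thread entity selection interface that the third thread queries in steps S203-S205 might look like the following sketch; the `Scheduler` class and its method names are hypothetical, and the two branches mirror the serial (object-based) and parallel (load-based) return rules above:

```python
class Scheduler:
    """Sketch of the thread entity selection interface exposed to the
    third thread; class and method names are hypothetical."""

    def __init__(self, thread_loads, object_to_thread):
        self.thread_loads = thread_loads          # thread id -> load rate
        self.object_to_thread = object_to_thread  # data object -> owning thread

    def select_return_thread(self, data_object=None):
        # Serial data: return to the thread that processed this object.
        if data_object is not None and data_object in self.object_to_thread:
            return self.object_to_thread[data_object]
        # Parallel data (third preset load requirement): lowest load.
        return min(self.thread_loads, key=self.thread_loads.get)
```

The third thread would call `select_return_thread` with the return data's object (if any) and send the return data to whichever current second thread is selected.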
In the embodiments of the present application, a scheduling module is added to schedule the threads whose load needs to be balanced and the message communication among upstream and downstream threads. The scheduling module can create a plurality of thread entities based on the thread whose load needs to be balanced, and determine to which thread entity the upstream thread sends the message to be processed and to which thread entity the downstream thread sends the return message, so that the message is distributed to a suitable thread entity for data processing or data return according to the specific condition of the message to be processed, thereby realizing load balancing among the plurality of thread entities.
The embodiment of the application also provides a thread data processing device, and fig. 8 is a block diagram of the thread data processing device according to an exemplary embodiment. As shown in fig. 8, the thread data processing apparatus 200 may include at least:
a determining module 201, configured to determine a target processing thread from a plurality of current second threads in a case where the first thread is run to obtain data to be processed of an original second thread; the target processing thread is a thread meeting the preset processing requirement; the plurality of current second threads are threads created based on the original second threads under the condition that the original second threads are overloaded; the processing capacity of each current second thread is consistent with that of the original second thread;
The sending module 202 is configured to send a target processing instruction to the first thread, where the target processing instruction instructs the first thread to send the data to be processed of the original second thread to the target processing thread, so that the target processing thread is executed to process the data to be processed.
Optionally, the determining module 201 is configured to: under the condition that the first thread is operated to obtain the data to be processed of the original second thread, before determining a target processing thread from a plurality of current second threads, under the condition that the original second thread is overloaded, creating a plurality of current second threads based on the original second thread; each current second thread and the original second thread have the same processing power.
Optionally, the determining module 201 is configured to: determining the thread load of each current second thread in the process of determining the target processing thread from a plurality of current second threads; and determining a target load meeting the first preset load requirement from the thread load of each current second thread, and determining the current second thread corresponding to the target load as a target processing thread.
Optionally, the determining module 201 is configured to: in the process of determining a target processing thread from a plurality of current second threads, determining a data object of data to be processed of an original second thread; based on the data object, a target processing thread is determined from the plurality of current second threads.
Optionally, the determining module 201 is configured to: in the process of determining a target processing thread from a plurality of current second threads based on the data object, determining the thread load of each current second thread under the condition that the data object meets the first preset object requirement; determining a target load meeting a second preset load requirement from the thread load of each current second thread, and determining the current second thread corresponding to the target load as a target processing thread; or alternatively; and under the condition that the data object meets the second preset object requirement, determining the thread which processes the data of the data object in the plurality of current second threads as a target processing thread.
Optionally, the scheduling module is located in the first thread, and the determining module 201 is configured to: receiving a thread selection request sent by a third thread; the thread selection request is generated after the data to be processed of the third thread is processed by the third thread to obtain return data; the data to be processed of the third thread is obtained by processing the data to be processed of the original second thread by the target processing thread; determining a target return thread from a plurality of current second threads; the target return thread is a thread meeting preset return requirements; sending thread selection information to a third thread; the thread selection information includes information of the target return thread, the thread selection information being used to instruct the third thread to send return data to the target return thread.
Optionally, the determining module 201 is configured to: determining the thread load of each current second thread in the process of determining a target return thread from the plurality of current second threads; determining a target load meeting a third preset load requirement from the thread load of each current second thread, and determining the current second thread corresponding to the target load as the target return thread; or alternatively, determining a thread that has processed the data of the data object among the plurality of current second threads as the target return thread; wherein the thread selection request includes information of the data object of the return data.
It should be noted that, the embodiment of the thread data processing device provided by the embodiment of the present application and the embodiment of the thread data processing method are based on the same inventive concept.
The embodiment of the application also provides an electronic device for processing thread data, which comprises a processor and a memory, wherein at least one instruction or at least one section of program is stored in the memory, and the at least one instruction or the at least one section of program is loaded and executed by the processor to realize the thread data processing method provided by any embodiment.
Embodiments of the present application also provide a computer readable storage medium that may be provided in a terminal to store at least one instruction or at least one program for implementing a thread data processing method in a method embodiment, where the at least one instruction or the at least one program is loaded and executed by a processor to implement the thread data processing method as provided in the method embodiment described above.
Alternatively, in the embodiment of the present description, the storage medium may be located in at least one network server among a plurality of network servers of a computer network. Alternatively, in the present embodiment, the storage medium may include, but is not limited to: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other media capable of storing program code.
The memory of the present embodiments may be used for storing software programs and modules, and the processor executes the software programs and modules stored in the memory to perform various functional application programs and thread data processing. The memory may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, application programs required for functions, and the like; the storage data area may store data created according to the use of the device, etc. In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device. Accordingly, the memory may also include a memory controller to provide access to the memory by the processor.
Embodiments of the present application also provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium, and the processor executes the computer instructions, so that the computer device executes the thread data processing method provided by the above method embodiment.
The embodiment of the thread data processing method provided by the embodiment of the application can be executed in a terminal, a computer terminal, a server, or a similar computing device. Taking running on a server as an example, fig. 9 is a block diagram of the hardware architecture of a server for a thread data processing method according to an exemplary embodiment. As shown in fig. 9, the server 300 may vary considerably in configuration or performance, and may include one or more central processing units (CPU) 33 (the central processing unit 33 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA), a memory 330 for storing data, and one or more storage media 320 (e.g., one or more mass storage devices) for storing application programs 323 or data 322. The memory 330 and the storage media 320 may be transitory or persistent storage. The program stored in the storage medium 320 may include one or more modules, each of which may include a series of instruction operations on the server. Still further, the central processing unit 33 may be configured to communicate with the storage medium 320 and execute, on the server 300, the series of instruction operations in the storage medium 320. The server 300 may also include one or more power supplies 360, one or more wired or wireless network interfaces 350, one or more input/output interfaces 340, and/or one or more operating systems 321, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
The input-output interface 340 may be used to receive or transmit data via a network. The specific example of the network described above may include a wireless network provided by a communication provider of the server 300. In one example, the input-output interface 340 includes a network adapter (Network Interface Controller, NIC) that may connect to other network devices through a base station to communicate with the internet. In one example, the input/output interface 340 may be a Radio Frequency (RF) module for communicating with the internet wirelessly.
It will be appreciated by those skilled in the art that the configuration shown in fig. 9 is merely illustrative and is not intended to limit the configuration of the electronic device. For example, the server 300 may also include more or fewer components than shown in fig. 9, or have a different configuration than shown in fig. 9.
It should be noted that: the sequence of the embodiments of the present application is only for description, and does not represent the advantages and disadvantages of the embodiments. And the foregoing description has been directed to specific embodiments of this specification. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for the device and server embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and references to the parts of the description of the method embodiments are only required.
It will be appreciated by those of ordinary skill in the art that all or part of the steps of implementing the above embodiments may be implemented by hardware, or may be implemented by a program to instruct related hardware, and the program may be stored in a computer readable storage medium, where the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing is only illustrative of the present application and is not to be construed as limiting thereof, but rather as various modifications, equivalent arrangements, improvements, etc., within the spirit and principles of the present application.

Claims (10)

1. A method of thread data processing, applied to a scheduling module, the method comprising:
under the condition that the first thread is operated to obtain the data to be processed of the original second thread, determining a target processing thread from a plurality of current second threads; the target processing thread is a thread meeting preset processing requirements; the plurality of current second threads are threads created based on the original second thread under the condition that the original second thread is overloaded; the processing capacity of each current second thread is consistent with that of the original second thread;
And sending a target processing instruction to the first thread, wherein the target processing instruction instructs the first thread to send the data to be processed of the original second thread to the target processing thread, so that the target processing thread is operated to process the data to be processed.
2. The method according to claim 1, wherein in a case where the first thread is executed to obtain data to be processed by the original second thread, before determining the target processing thread from the plurality of current second threads, the method further comprises:
creating a plurality of current second threads based on the original second threads in case the original second threads are overloaded; the processing capacity of each current second thread is consistent with that of the original second thread.
3. A method of thread data processing according to claim 1 or 2, wherein said determining a target processing thread from a plurality of current second threads comprises:
determining the thread load of each current second thread;
and determining a target load meeting a first preset load requirement from the thread load of each current second thread, and determining the current second thread corresponding to the target load as the target processing thread.
4. A method of thread data processing according to claim 1 or 2, wherein said determining a target processing thread from a plurality of current second threads comprises:
determining a data object of the data to be processed of the original second thread;
a target processing thread is determined from the plurality of current second threads based on the data object.
5. The thread data processing method of claim 4, wherein said determining a target processing thread from said plurality of current second threads based on said data object comprises:
determining the thread load of each current second thread under the condition that the data object meets the first preset object requirement;
determining a target load meeting a second preset load requirement from the thread load of each current second thread, and determining the current second thread corresponding to the target load as the target processing thread;
or alternatively;
and under the condition that the data object meets the second preset object requirement, determining the thread which is processed by the data object in the plurality of current second threads as the target processing thread.
6. The thread data processing method of claim 1, wherein the scheduling module is located in the first thread, the method further comprising:
Receiving a thread selection request sent by a third thread; the thread selection request is generated after the data to be processed of the third thread is processed by the third thread to obtain return data; the data to be processed of the third thread is obtained by processing the data to be processed of the original second thread by the target processing thread;
determining a target return thread from the plurality of current second threads; the target return thread is a thread meeting preset return requirements;
sending thread selection information to the third thread; the thread selection information includes information of the target return thread, the thread selection information being used to instruct the third thread to send the return data to the target return thread.
7. The method of claim 6, wherein said determining a target return thread from the plurality of current second threads comprises:
determining the thread load of each current second thread;
determining a target load meeting a third preset load requirement from the thread load of each current second thread, and determining the current second thread of the original second thread corresponding to the target load as the target return thread;
Or alternatively;
determining a thread of the plurality of current second threads, which has processed the data of the data object, as the target return thread; wherein the thread selection request includes information of the data object of the return information.
8. A thread data processing apparatus for use with a scheduling module, the apparatus comprising:
the determining module is used for determining a target processing thread from a plurality of current second threads in a case that the first thread runs and obtains data to be processed of an original second thread; the target processing thread is a thread meeting a preset processing requirement; the plurality of current second threads are threads created based on the original second thread in a case that the original second thread is overloaded; and the processing capacity of each current second thread is consistent with that of the original second thread;
and the sending module is used for sending a target processing instruction to the first thread, wherein the target processing instruction instructs the first thread to send the data to be processed of the original second thread to the target processing thread, so that the target processing thread runs to process the data to be processed.
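The two modules of claim 8 can be mirrored as two small classes. The class and field names, and the choice of "minimum load" as the preset processing requirement, are assumptions for illustration only:

```python
class DeterminingModule:
    """Counterpart of claim 8's determining module: selects a target
    processing thread from the current second threads. The 'preset
    processing requirement' is assumed here to be minimum load."""
    def determine_target(self, current_second_threads):
        return min(current_second_threads, key=lambda t: t["load"])

class SendingModule:
    """Counterpart of claim 8's sending module: builds the target processing
    instruction that tells the first thread where to forward its data."""
    def build_instruction(self, target_thread, data_to_process):
        return {"forward_to": target_thread["name"], "data": data_to_process}
```

In this sketch the instruction is a plain dict; in a real system it would be whatever message the first thread's dispatch loop consumes.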
9. An electronic device, characterized in that the device comprises a processor and a memory, wherein at least one instruction or at least one program is stored in the memory, and the at least one instruction or the at least one program is loaded and executed by the processor to implement the thread data processing method according to any one of claims 1-7.
10. A computer readable storage medium having stored therein at least one instruction or at least one program loaded and executed by a processor to implement the thread data processing method of any one of claims 1-7.
CN202310714913.9A 2023-06-15 2023-06-15 Thread data processing method, device, equipment and storage medium Pending CN116860436A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310714913.9A CN116860436A (en) 2023-06-15 2023-06-15 Thread data processing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310714913.9A CN116860436A (en) 2023-06-15 2023-06-15 Thread data processing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116860436A true CN116860436A (en) 2023-10-10

Family

ID=88229527

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310714913.9A Pending CN116860436A (en) 2023-06-15 2023-06-15 Thread data processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116860436A (en)

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1853165A (en) * 2003-09-30 2006-10-25 英特尔公司 Methods and apparatuses for compiler-creating helper threads for multi-threading
US20070180438A1 (en) * 2006-01-30 2007-08-02 Sony Computer Entertainment Inc. Stall prediction thread management
CN105022671A (en) * 2015-07-20 2015-11-04 中国科学院计算技术研究所 Load balancing method for parallel processing of stream data
CN110781016A (en) * 2019-10-30 2020-02-11 支付宝(杭州)信息技术有限公司 Data processing method, device, equipment and medium
CN112306646A (en) * 2020-06-29 2021-02-02 北京沃东天骏信息技术有限公司 Method, device, equipment and readable storage medium for processing transaction
CN112363834A (en) * 2020-11-10 2021-02-12 中国平安人寿保险股份有限公司 Task processing method, device, terminal and storage medium
CN112445615A (en) * 2020-11-12 2021-03-05 广州海鹚网络科技有限公司 Thread scheduling system, computer equipment and storage medium
CN112749013A (en) * 2021-01-19 2021-05-04 广州虎牙科技有限公司 Thread load detection method and device, electronic equipment and storage medium
CN112905326A (en) * 2021-02-18 2021-06-04 上海哔哩哔哩科技有限公司 Task processing method and device
CN113094172A (en) * 2021-04-01 2021-07-09 北京天融信网络安全技术有限公司 Server management method and device applied to distributed storage system
CN113342886A (en) * 2021-06-23 2021-09-03 杭州数梦工场科技有限公司 Data exchange method and device
CN113835866A (en) * 2021-10-09 2021-12-24 南方电网数字电网研究院有限公司 Multithreading task scheduling optimization method
CN114567519A (en) * 2022-02-28 2022-05-31 武汉世聪智能科技有限公司 Method and device for multithread parallel management of instruction messages of multiple intelligent devices
CN114611045A (en) * 2022-03-21 2022-06-10 平安普惠企业管理有限公司 Method and device for processing front-end interface request, computer equipment and storage medium
US20220206846A1 (en) * 2020-12-31 2022-06-30 Skyler Arron Windh Dynamic decomposition and thread allocation
CN115033369A (en) * 2022-07-05 2022-09-09 斑马网络技术有限公司 Thread scheduling method, device and equipment based on task processing
CN115701584A (en) * 2021-08-02 2023-02-10 浙江大华技术股份有限公司 Thread determining method and device, storage medium and electronic device

Similar Documents

Publication Publication Date Title
US9323580B2 (en) Optimized resource management for map/reduce computing
CN104395889A (en) Application enhancement using edge data center
CN109032803B (en) Data processing method and device and client
CN109191287B (en) Block chain intelligent contract fragmentation method and device and electronic equipment
CN113342477B (en) Container group deployment method, device, equipment and storage medium
CN115408100A (en) Container cluster scheduling method, device, equipment and storage medium
CN112035238A (en) Task scheduling processing method and device, cluster system and readable storage medium
CN107506284B (en) Log processing method and device
WO2021247172A1 (en) Context modeling of occupancy coding for point cloud coding
US20220353550A1 (en) Semi-decoupled partitioning for video coding
CN107092507A (en) Skin change method, the apparatus and system of application program
AU2021257883B2 (en) Context modeling of occupancy coding for pointcloud coding
CN112817428A (en) Task running method and device, mobile terminal and storage medium
CN116860436A (en) Thread data processing method, device, equipment and storage medium
CN115840649A (en) Method and device for allocating partitioned capacity block type virtual resources, storage medium and terminal
CN107454137B (en) Method, device and equipment for on-line business on-demand service
Guo et al. PARA: Performability‐aware resource allocation on the edges for cloud‐native services
CN116841720A (en) Resource allocation method, apparatus, computer device, storage medium and program product
CN113438678A (en) Method and device for distributing cloud resources for network slices
CN111381831B (en) Application deployment method and server
CN117707797B (en) Task scheduling method and device based on distributed cloud platform and related equipment
KR20160084215A (en) Method for dynamic processing application for cloud streaming service and apparatus for the same
CN110933122A (en) Method, apparatus, and computer storage medium for managing server
RU2777042C1 (en) Method for transmitting the subimage identifier
CN116991562B (en) Data processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination