CN113377509A - Data processing method and system - Google Patents

Data processing method and system

Info

Publication number
CN113377509A
Authority
CN
China
Prior art keywords
circular queue
position information
index value
target
historical
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110635503.6A
Other languages
Chinese (zh)
Inventor
罗小凡 (Luo Xiaofan)
Current Assignee
Shanghai Bilibili Technology Co Ltd
Original Assignee
Shanghai Bilibili Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Bilibili Technology Co Ltd
Priority to CN202110635503.6A
Publication of CN113377509A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/48: Program initiating; program switching, e.g. by interrupt
    • G06F 9/4806: Task transfer initiation or dispatching
    • G06F 9/4843: Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881: Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F 9/52: Program synchronisation; mutual exclusion, e.g. by means of semaphores

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Embodiments of the present application disclose a data processing method for use in a computer device, where the computer device is configured with a circular queue used for data transfer among a plurality of threads. The data processing method includes the following steps: in response to an operation request of a target thread for the circular queue, providing historical operation position information of the circular queue to the target thread; determining, according to the historical operation position information acquired by the target thread and the current operation position information of the circular queue, whether the target thread is to execute a target operation on the circular queue, where the target operation includes a write operation or a read operation; and if it is determined that the target thread is to execute the target operation on the circular queue, updating the current operation position information and executing the target operation. In the embodiments of the present application, no locking or unlocking is needed, so the performance overhead of locking and unlocking is avoided and the consumption of system resources is reduced.

Description

Data processing method and system
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a data processing method, a data processing system, computer equipment and a readable storage medium.
Background
Operating systems (e.g., the Android system) typically support multi-process and multi-threaded execution, and shared resources may therefore be accessed by multiple threads at once. To avoid the data races caused by simultaneous access to shared resources, which can produce unpredictable results, contention between producer threads and consumer threads is conventionally avoided by locking the resource. A producer thread writes the resource, and a consumer thread reads it; whether producer or consumer, a thread must lock the resource before performing the corresponding operation on it. The inventor has recognized that concurrent, safe access to shared resources is currently achieved mainly with mutex locks, spin locks, or read-write locks. A mutex lock suits scenarios where the critical section executes for a relatively long time; a spin lock suits scenarios where the critical section executes for a relatively short time; a read-write lock is split into a read-mode lock, used when reading the resource, and a write-mode lock, used when writing it, and suits read-mostly scenarios.
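As context for the problem the patent addresses, the conventional lock-based approach described above can be sketched with POSIX threads; the API choice, the shared counter, and all names here are illustrative assumptions, not taken from the patent:

```c
#include <pthread.h>

/* Conventional mutex-based protection of a shared resource: every
 * producer and consumer must lock before touching it, paying the
 * potential cost of context switches whenever there is contention. */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long shared_counter = 0;

void produce_one(void) {
    pthread_mutex_lock(&lock);   /* block until exclusive access */
    shared_counter++;            /* critical section: write */
    pthread_mutex_unlock(&lock);
}

long read_counter(void) {
    pthread_mutex_lock(&lock);   /* readers also serialize under a mutex */
    long v = shared_counter;
    pthread_mutex_unlock(&lock);
    return v;
}
```

Even uncontended, each operation pays two lock-library calls; under contention the waiting thread may be descheduled, which is exactly the overhead the lock-free design below avoids.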
However, when shared resources are accessed concurrently and safely by multiple threads, a coarse-grained lock can reduce software performance and consume unnecessary system resources.
Disclosure of Invention
In view of the above, an object of the embodiments of the present application is to provide a data processing method, system, computer device, and computer-readable storage medium that address the following problem: when shared resources are accessed concurrently and safely by multiple threads, coarse-grained locking reduces software performance and consumes unnecessary system resources.
One aspect of the embodiments of the present application provides a data processing method for use in a computer device, where the computer device is configured with a circular queue used for data transfer among a plurality of threads. The data processing method includes: in response to an operation request of a target thread for the circular queue, providing historical operation position information of the circular queue to the target thread, where the historical operation position information is the operation position information of the last operation performed by the plurality of threads on the circular queue, and the target thread is any one of the plurality of threads; determining, according to the historical operation position information acquired by the target thread and the current operation position information of the circular queue, whether the target thread is to execute a target operation on the circular queue, where the target operation includes a write operation or a read operation; and if it is determined that the target thread is to execute the target operation on the circular queue, updating the current operation position information and executing the target operation.
Optionally, the determining, according to the historical operation position information acquired by the target thread and the current operation position information of the circular queue, whether the target thread is to execute a target operation on the circular queue includes: judging whether the historical operation position information acquired through the target thread is the same as the current operation position information; if they are the same, determining that the target thread is to execute the target operation on the circular queue; if they are different, executing the following loop operation until the historical operation position information acquired through the target thread is the same as the current operation position information: in response to a renewed operation request of the target thread, providing the latest historical operation position information of the circular queue to the target thread, and judging whether the latest historical operation position information acquired by the target thread is the same as the current operation position information.
Optionally, the method further includes: judging whether the number of loop operations is greater than a preset threshold; and if the number of loop operations is greater than the preset threshold, reducing the number of threads among the plurality of threads.
Optionally, the historical operation position information includes a plurality of historical index values, and the plurality of historical index values includes: a first historical index value representing a write header location of the circular queue in a last operation; a second historical index value representing a write tail position of the circular queue in a last operation; a third history index value representing a read head position of the circular queue in a last operation; a fourth historical index value representing a read tail position of the circular queue in a last operation; the first historical index value and the second historical index value are obtained by updating according to the last write operation; and the third historical index value and the fourth historical index value are obtained by updating according to the last reading operation.
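One plausible way to hold the four index values just described, purely as an illustrative sketch (the struct layout and every name here are assumptions, not from the patent), treats them as monotonically growing counters updated in pairs by the last write (head/tail) and the last read (head/tail):

```c
#include <stdint.h>

/* Hypothetical layout of the circular queue's position state: four
 * monotonically growing indices, never wrapped, so their differences
 * stay meaningful even after many operations. */
typedef struct {
    uint64_t write_head;  /* first index: where the last write began claiming */
    uint64_t write_tail;  /* second index: up to where data is fully written */
    uint64_t read_head;   /* third index: where the last read began claiming */
    uint64_t read_tail;   /* fourth index: up to where data is fully consumed */
} queue_positions;

/* Slots occupied or being written: write_head minus read_tail. */
uint64_t used_slots(const queue_positions *p) {
    return p->write_head - p->read_tail;
}

/* Slots safely readable: write_tail minus read_head. */
uint64_t readable_slots(const queue_positions *p) {
    return p->write_tail - p->read_head;
}
```

The producer's fullness test in the next passage compares the first and fourth of these indices against the queue size.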
Optionally, the target thread is a producer thread, and the producer thread is configured to execute the write operation on the circular queue; the method further includes: obtaining the difference between the first historical index value and the fourth historical index value; judging whether the difference is smaller than the queue size of the circular queue; if the difference is not smaller than the queue size of the circular queue, determining that the target thread cannot perform a write operation on the circular queue; and if the difference is smaller than the queue size of the circular queue, judging whether the first historical index value is the same as the first current index value of the circular queue, so as to determine whether the target thread is to execute the write operation.
Optionally, if it is determined that the target thread executes the target operation on the circular queue, updating the current operation location information and executing the target operation includes: updating a first current index value in the circular queue; the updated first current index value is used for representing the write head position of the writable data in the circular queue; determining the initial position of the write operation according to the updated first current index value, and executing the write operation; if the data writing of the target thread is completed, updating a second current index value of the circular queue; the updated second current index value is used for representing the write tail position of the written data in the circular queue.
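A minimal single-file sketch of the producer path just described, combining the fullness check (first index minus fourth index against the queue size), the atomic claim of the write head, and the publication of the write tail. It uses GCC's __sync builtins, which the patent's term list mentions, but the function names, the fixed capacity, and the layout are all illustrative assumptions:

```c
#include <stdint.h>

#define QUEUE_SIZE 8  /* illustrative power-of-two capacity */

typedef struct {
    uint64_t write_head, write_tail, read_head, read_tail;
    int      slots[QUEUE_SIZE];
} ring;

/* Producer sketch: returns 1 on success, 0 if the queue is full.
 * Step 1: snapshot the historical write head; reject if full
 *         (write_head - read_tail >= capacity).
 * Step 2: atomically advance write_head only if it is unchanged,
 *         i.e. no other producer claimed the slot in the meantime.
 * Step 3: write the payload, then publish by advancing write_tail. */
int try_enqueue(ring *q, int value) {
    for (;;) {
        uint64_t head = q->write_head;             /* historical position */
        if (head - q->read_tail >= QUEUE_SIZE)
            return 0;                              /* full: cannot write */
        /* claim the slot: CAS succeeds only if head is still current */
        if (__sync_bool_compare_and_swap(&q->write_head, head, head + 1)) {
            q->slots[head % QUEUE_SIZE] = value;   /* write into claimed slot */
            /* publish in claim order: advance write_tail once our turn comes */
            while (!__sync_bool_compare_and_swap(&q->write_tail, head, head + 1))
                ;                                  /* spin until earlier writers finish */
            return 1;
        }
        /* lost the race: retry with a fresh historical position */
    }
}
```

Advancing write_head before the data is written, and write_tail only after, is what lets consumers distinguish claimed-but-unwritten slots from published ones.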
Optionally, the target thread is a consumer thread, and the consumer thread is configured to execute the read operation on the circular queue; the method further includes: judging whether the second historical index value and the third historical index value are equal; if they are equal, determining that the target thread cannot perform a read operation on the circular queue; and if they are not equal, judging whether the third historical index value and the third current index value of the circular queue are the same, so as to determine whether the target thread is to execute the read operation.
Optionally, if it is determined that the target thread executes the target operation on the circular queue, updating the current operation location information and executing the target operation includes: updating a third current index value in the circular queue; wherein the updated third current index value is used to represent a read head position of the readable data in the circular queue; determining the initial position of the reading operation according to the updated third current index value, and executing the reading operation; if the data reading of the target thread is completed, updating a fourth current index value of the circular queue; the updated fourth current index value is used to indicate a read tail position of the read data in the circular queue.
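The read-side checks and index updates described above can be sketched symmetrically to the write side: empty when the published write tail equals the read head, claim the read head atomically, then publish consumption by advancing the read tail. As before, the struct layout and names are illustrative assumptions, not from the patent:

```c
#include <stdint.h>

#define QUEUE_SIZE 8  /* illustrative power-of-two capacity */

typedef struct {
    uint64_t write_head, write_tail, read_head, read_tail;
    int      slots[QUEUE_SIZE];
} ring;

/* Consumer sketch: returns 1 with the value in *out, or 0 if empty. */
int try_dequeue(ring *q, int *out) {
    for (;;) {
        uint64_t head = q->read_head;              /* historical read position */
        if (q->write_tail == head)
            return 0;                              /* empty: nothing published */
        /* claim the slot: CAS succeeds only if head is still current */
        if (__sync_bool_compare_and_swap(&q->read_head, head, head + 1)) {
            *out = q->slots[head % QUEUE_SIZE];    /* read from the claimed slot */
            /* publish consumption in claim order */
            while (!__sync_bool_compare_and_swap(&q->read_tail, head, head + 1))
                ;                                  /* spin until earlier readers finish */
            return 1;
        }
        /* lost the race: retry with a fresh snapshot */
    }
}
```

Comparing against write_tail rather than write_head is deliberate: it stops a consumer from reading a slot a producer has claimed but not yet filled.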
Optionally, the method further includes an initialization operation of the circular queue: dynamically adjusting the queue size of the circular queue so that the queue size is 2^N, where N is a positive integer. During the computation of the queue size, temporary variables generated in the process are kept in registers.
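The point of forcing the queue size to a power of two is that the slot for a monotonically growing index can be computed with a bit mask instead of a division. A common bit-smearing sketch for rounding up to the next power of two (standard technique, not from the patent; temporaries like v are exactly the register-resident values the text alludes to):

```c
#include <stdint.h>

/* Round a requested capacity up to the next power of two, so that
 * "index % size" can be replaced by "index & (size - 1)". */
uint64_t next_pow2(uint64_t n) {
    uint64_t v = n - 1;          /* temporary; lives in a register */
    v |= v >> 1;                 /* smear the highest set bit downward */
    v |= v >> 2;
    v |= v >> 4;
    v |= v >> 8;
    v |= v >> 16;
    v |= v >> 32;
    return v + 1;
}

/* With a power-of-two size, locating a ring slot is a single AND. */
uint64_t slot_for(uint64_t index, uint64_t pow2_size) {
    return index & (pow2_size - 1);
}
```

On a hot enqueue/dequeue path this replaces an integer division per operation with one AND, which is why the initialization bothers to round the size up.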
Optionally, the method further includes: providing branch jump information to a compiler, so that the compiler can perform code optimization according to the branch jump information.
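One way such branch information reaches the compiler, assuming GCC as the patent's term list suggests, is the __builtin_expect builtin; the likely/unlikely macro names below follow the common Linux-kernel convention and are not from the patent:

```c
/* __builtin_expect passes branch-probability hints to GCC so it can
 * lay out the expected path as straight-line fall-through code. */
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

/* Illustrative use: the null-pointer error path is marked unlikely,
 * so the increment stays on the hot, branch-free path. */
int checked_increment(int *counter) {
    if (unlikely(counter == 0))
        return -1;               /* cold error path */
    *counter += 1;               /* hot path */
    return 0;
}
```

In a lock-free loop like the ones above, marking the "CAS failed, retry" branch unlikely keeps the uncontended fast path tight.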
An aspect of the embodiments of the present application further provides a data processing system, configured in a computer device, where the computer device is configured with a circular queue, and the circular queue is used for data transfer among a plurality of threads; the data processing system includes: a response module, configured to provide, in response to an operation request of a target thread for the circular queue, historical operation position information of the circular queue to the target thread, where the historical operation position information is operation position information of the multiple threads for a last operation of the circular queue, and the target thread is any one of the multiple threads; the judging module is used for determining whether the target thread executes target operation on the circular queue according to the historical operation position information acquired by the target thread and the current operation position information of the circular queue, wherein the target operation comprises write operation or read operation; and the operation module is used for updating the current operation position information and executing the target operation if the target thread is determined to execute the target operation on the circular queue.
An aspect of the embodiments of the present application further provides a computer device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the steps of the data processing method when executing the computer program.
An aspect of the embodiments of the present application further provides a computer-readable storage medium storing a computer program executable by a processor, where the computer program, when executed by the processor, implements the steps of the data processing method.
The data processing method, data processing system, computer device, and computer-readable storage medium provided by the embodiments of the present application have the following advantages: based on the historical operation position information of the circular queue and the current operation position information of the circular queue, each thread (such as the target thread) can determine which thread wins the competition and whether it may write to or read from the circular queue. In the embodiments of the present application, when shared resources are accessed concurrently and safely by multiple threads, this mechanism allows the circular queue to be shared among threads for data synchronization without locking and unlocking, so the performance overhead of locking and unlocking is effectively avoided, unnecessary system resource consumption, such as that of switching between user mode and kernel mode or switching between threads, is reduced, and system efficiency is improved.
Drawings
FIG. 1 is a diagram schematically illustrating an application environment of a data processing method according to an embodiment of the present application;
fig. 2 schematically shows a flow chart of a data processing method according to a first embodiment of the present application;
fig. 3 schematically shows a diagram of the substeps of step S204 in fig. 2;
FIG. 4 is a schematic diagram illustrating another additional flowchart of a data processing method according to a first embodiment of the present application;
FIG. 5 is a schematic diagram illustrating another additional flowchart of a data processing method according to a first embodiment of the present application;
fig. 6 schematically shows a diagram of sub-steps of step S204 in fig. 2;
FIG. 7 is a schematic diagram illustrating another additional flowchart of a data processing method according to a first embodiment of the present application;
fig. 8 schematically shows a diagram of sub-steps of step S204 in fig. 2;
FIG. 9 is a flow chart schematically illustrating another addition of the data processing method according to the first embodiment of the present application;
FIG. 10 is a flow chart schematically illustrating another addition of the data processing method according to the first embodiment of the present application;
FIGS. 11(A)-11(B) schematically show specific operation flowcharts of the data processing method according to the first embodiment of the present application;
FIG. 12 is a block diagram of a data processing system according to embodiment two of the present application; and
fig. 13 schematically shows a hardware architecture diagram of a computer device according to a third embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the descriptions involving "first", "second", "third", etc. in this application are for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined by "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the various embodiments may be combined with each other, provided that such a combination can be realized by a person skilled in the art; when technical solutions are contradictory or cannot be realized, the combination should be considered not to exist and falls outside the protection scope of the present application.
The present application relates to the interpretation of terms:
atomicity: atomic semantics is a concurrent synchronization mechanism for computer science, where a read operation returns the result of the last previous write operation. The atomicity is guaranteed by a CPU hardware instruction (cmpxchg) and is used for realizing uninterrupted data exchange operation in multi-thread programming, so that the problem of data inconsistency caused by uncertainty of an execution sequence and unpredictability of interruption when a certain data is modified and rewritten by multi-thread simultaneously is avoided, and the overhead of context switching or CPU spin waiting and the like caused by locking when the certain data is modified by multi-thread simultaneously is avoided.
GCC: a C code compiler is rich in sync _ val _ compare _ and _ swap and other rich functional characteristics.
And (4) process: the minimum unit of resource management includes threads.
Thread: the minimum unit of program execution can be distributed with CPU time slices, attached to the process and provided with independent stack space.
Mutual exclusion locking: a lock for multi-thread synchronization realized by exclusive resource is suitable for a scene with long execution time of a critical section, and has the defect of overhead of two times of thread context switching.
Self-rotating lock: a lock that is multithreaded synchronously, implemented with busy-wait (the thread that acquires the lock will remain executing until released), is suitable for very short blocking scenarios.
A read-write lock: the read operation can be concurrent and repeated, the write operation is mutually exclusive, and the performance is higher under the scene of one-write-many read.
The producer thread: a thread for producing data.
The consumer thread: a thread for consuming data.
And (3) circulating a queue: that is, the sequential queue is connected end to end, and the table storing the queue elements is logically viewed as a ring.
Fig. 1 schematically shows an application environment diagram of a data processing method according to the first embodiment of the present application. Computer device 10000 can be configured to provide data processing services. Computer device 10000 can be any type of computing device that supports multithreading, such as a smartphone, tablet, laptop, or server. The computer device 10000 can run an iOS system, an Android system, a Windows system, a Linux system, and the like.
Example one
Fig. 2 schematically shows a flow chart of a data processing method according to a first embodiment of the present application. The present embodiment may be implemented in a computer device 10000, and the computer device 10000 is configured with a circular queue for data transfer between a plurality of threads. And the flow chart of the present embodiment is not used to limit the order of executing the steps.
As shown in fig. 2, the data processing method may include steps S200 to S204, in which:
step S200, in response to an operation request of a target thread for the circular queue, providing historical operation position information of the circular queue to the target thread, where the historical operation position information is operation position information of the plurality of threads for the last operation of the circular queue, and the target thread is any one of the plurality of threads.
When a plurality of threads simultaneously compete for the qualification to read/write the circular queue, each of these threads can acquire the historical operation position information of the circular queue. Therefore, when the target thread participates in the competition for read/write qualification, it also needs to obtain the historical operation position information from the circular queue. For example, after the target thread acquires the historical operation position information, the acquired information may be used as a reference value, and this reference value serves as the basis for determining whether the target thread can currently read/write.
Step S202, determining whether the target thread executes target operation on the circular queue according to the historical operation position information acquired by the target thread and the current operation position information of the circular queue, wherein the target operation comprises write operation or read operation.
As described above, when several threads simultaneously compete for the qualification of reading/writing to the circular queue, the several threads respectively obtain the historical operating position information of the circular queue, and the target thread is one of the several threads.
To ensure the fairness and the high efficiency of the resource allocation, the following operations are performed: and selecting the thread which acquires the historical operation position information earliest from the plurality of threads according to the time sequence of acquiring the historical operation position information, and further judging whether the thread which acquires the historical operation position information earliest can execute read/write operation. And other threads in the plurality of threads and other threads newly joining competition acquire the historical operation position information in the circular queue again, and select again based on the time sequence.
It can be judged whether the historical operation position information was acquired first by the target thread. If so, whether the target thread is to execute the target operation on the circular queue is determined according to the historical operation position information acquired by the target thread and the current operation position information of the circular queue, where the target operation includes a write operation or a read operation. If not, the following loop operation is executed until the latest historical operation position information is acquired first by the target thread: in response to a renewed operation request of the target thread, the latest historical operation position information of the circular queue is provided to the target thread, and it is judged whether the latest historical operation position information was acquired first by the target thread. Specifically:
If, in the first round of competition, the target thread is not the first thread to take the historical operation position information of the circular queue, it enters a second round: it acquires the historical operation position information from the circular queue again and judges again whether it is the first thread to take that information; if not, it enters a third round. Competition may be repeated until the target thread wins.
It should be noted that the historical operation position information of the circular queue is a variable. In the second round of competition, the historical operation position information the target thread takes from the circular queue may be the same as, or different from, the information it took in the first round. For example:
in the target thread's first round of competition, the most recent operation was the 127th, so the historical operation position information corresponding to the 127th operation is acquired;
in the target thread's second round of competition, the most recent operation was the 128th, so the historical operation position information corresponding to the 128th operation is acquired;
in the target thread's third round of competition, the most recent operation was still the 128th, so the historical operation position information corresponding to the 128th operation is again acquired;
the fourth round of competition and each subsequent round proceed analogously.
Eventually, after the operation position has been updated some number of times, the target thread becomes the first to acquire the historical operation position information.
Step S204, if the target operation is executed on the circular queue by the target thread, the current operation position information is updated and the target operation is executed.
The updated current operation position information may be used to prevent other threads of the plurality of threads from executing the operation on the circular queue during the target thread executing the target operation.
If the historical operating position information acquired by the target thread is the same as the current operating position information of the circular queue, it is indicated that the operating position information in the circular queue is not modified by other threads, that is, the circular queue is not occupied by other threads currently, so that the target thread can occupy and execute read/write operations.
If the historical operation position information acquired through the target thread is different from the current operation position information of the circular queue, it is indicated that the operation position information in the circular queue is currently modified by other threads. That is, the circular queue is currently being occupied by other threads, and therefore, the target thread may not perform read/write operations. In this case, the target thread needs to request the latest historical operation position information from the circular queue again to compete for the operation authority.
As an example, in order to increase the competition efficiency, as shown in fig. 3, the step S204 may perform the following steps: step S300, judging whether the historical operation position information acquired by the target thread is the same as the current operation position information; step S302, if the historical operation position information acquired by the target thread is the same as the current operation position information, determining that the target thread executes target operation on the circular queue; step S304, if the historical operating position information and the current operating position information acquired by the target thread are different, executing the following loop operation until the historical operating position information and the current operating position information acquired by the target thread are the same: and responding to a re-operation request of the target thread, providing the latest historical operation position information of the circular queue to the target thread, and judging whether the latest historical operation position information acquired by the target thread is the same as the current operation position information.
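Steps S300-S304 amount to a classic compare-and-swap retry loop; the following sketch assumes, purely for illustration, that the operation position information is a single 64-bit counter (the names are hypothetical, not from the patent):

```c
#include <stdint.h>

/* Illustrative stand-in for the circular queue's operation position. */
typedef struct { uint64_t op_pos; } queue_state;

/* Retry until our snapshot ("historical" position) still matches the
 * current position at the instant we try to advance it; returns the
 * position this thread won the right to operate at. */
uint64_t contend_for_operation(queue_state *q) {
    for (;;) {
        uint64_t historical = q->op_pos;           /* S300: take a snapshot */
        /* S302: if unchanged, atomically claim by advancing the position */
        if (__sync_bool_compare_and_swap(&q->op_pos, historical, historical + 1))
            return historical;                     /* won: operate here */
        /* S304: another thread moved it first; loop and re-fetch */
    }
}
```

Each failed CAS corresponds to one round of the loop operation counted against the threshold discussed next.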
As an example, as shown in fig. 4, the data processing method further includes: step S400, judging whether the number of loop operations is greater than a preset threshold; step S402, if the number of loop operations is greater than the preset threshold, reducing the number of threads among the plurality of threads. In this embodiment, if too many loop iterations are consumed, contention among the threads is severe and the circular queue is currently busy. Therefore, while still meeting performance requirements, the number of threads can be appropriately reduced, for example by reducing the number of producer threads or the number of consumer threads. For instance, with the default numbers of producer and consumer threads in use, how busy the circular queue is, and whether the numbers of producer and consumer threads are reasonably configured, can be judged from the number of messages in the circular queue and/or the number of loop iterations executed by each thread. Based on the busyness of the circular queue and the configuration of the various threads, it can then be decided whether to reduce the number of producer threads or of consumer threads, improving the reasonableness of the thread configuration and optimizing data processing efficiency.
After the target thread obtains the occupation permission of the circular queue, it performs a write operation or a read operation on the circular queue. The updated current operation position information is the position information of the target thread's current write or read operation, for example: head position information indicating where the target thread starts writing, or tail position information indicating where the written data ends when the target thread completes a write operation.
While the target thread performs the target operation on the circular queue: since the updated current operation position information is the position information of the target thread's current write or read operation, if another thread (say, thread Y) needs to write to or read from the circular queue, it must compare the historical operation position information it acquired (i.e., the operation position information of the operation preceding the target thread's current operation) with the updated current operation position information, and the comparison result determines whether thread Y may currently write to or read from the circular queue. Because the circular queue is currently occupied by the target thread, the current operation position information is necessarily updated, and the updated value necessarily differs from the historical operation position information acquired by thread Y; it follows that thread Y cannot perform any write or read operation on the circular queue and must compete again for permission to read or write, which guarantees the atomicity of the operation. That is, the updated current operation position information prevents other threads from operating on the circular queue while the target thread performs the target operation. It should be noted that the write and read operations in the embodiments of the present application are atomic.
The data processing method provided by this embodiment determines, for each thread (e.g., the target thread), which thread wins and may write to or read from the circular queue, based on the historical operation position information and the current operation position information of the circular queue. Multithreaded sharing of the circular queue for data synchronization can therefore be achieved on this mechanism without locking and unlocking, which avoids the performance overhead of locks, such as the cost of switching between user mode and kernel mode and of switching between threads, thereby improving system efficiency.
Other alternatives are provided below.
As an example, the historical operation location information includes a plurality of historical index values, the plurality of historical index values including:
a first historical index value representing a write header location of the circular queue in a last operation;
a second historical index value representing a write tail position of the circular queue in a last operation;
a third history index value representing a read head position of the circular queue in a last operation;
a fourth historical index value representing a read tail position of the circular queue in a last operation;
the first historical index value and the second historical index value are obtained by updating according to the last write operation; and the third historical index value and the fourth historical index value are obtained by updating according to the last reading operation.
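One possible in-memory layout for these four index values is sketched below (all names are illustrative, not from the source). Monotonically increasing 64-bit counters are assumed rather than wrapped offsets; the physical slot is recovered with a bit-mask when the queue size is a power of two:

```cpp
#include <atomic>
#include <cstdint>

struct RingMeta {
    std::atomic<uint64_t> wr_head{0};  // first historical index value: write head
    std::atomic<uint64_t> wr_tail{0};  // second: write tail (last committed write)
    std::atomic<uint64_t> rd_head{0};  // third: read head
    std::atomic<uint64_t> rd_tail{0};  // fourth: read tail (last completed read)
};

// Map a monotonically increasing index to a physical slot; valid when
// `size` is a power of two.
inline uint64_t slot_of(uint64_t index, uint64_t size) {
    return index & (size - 1);
}
```

Because such counters only ever grow, an index value is never reused within a realistic run, which is one way the integer-index approach avoids the ABA problem.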
In this embodiment, each historical index value is an integer that efficiently indexes the circular queue; an integer index is used rather than a memory address in order to avoid the ABA problem.
The target thread may be a producer thread or a consumer thread.
First: the target thread is a producer thread, and the producer thread is used for performing the write operation on the circular queue.
As an example, it may first be determined whether the circular queue is full. If the circular queue is full, it can be determined directly that the target thread cannot perform the write operation, which improves judgment efficiency. As shown in fig. 5, the data processing method may further include steps S500 to S506, in which: step S500, obtaining the difference between the first historical index value and the fourth historical index value; step S502, determining whether the difference is smaller than the queue size of the circular queue; step S504, if the difference is not smaller than the queue size of the circular queue, determining that the target thread cannot perform a write operation on the circular queue; and step S506, if the difference is smaller than the queue size of the circular queue, determining whether the first historical index value is the same as the first current index value of the circular queue, so as to determine whether the target thread performs the write operation.
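With monotonically increasing indices, the fullness test of steps S500 to S504 reduces to one subtraction and one comparison. This is a sketch under that assumption; the source does not fix the index representation:

```cpp
#include <cstdint>

// The queue is full when the write head has run a full queue-size ahead of
// the read tail, i.e. the difference is not smaller than the queue size.
bool queue_full(uint64_t wr_head, uint64_t rd_tail, uint64_t size) {
    return wr_head - rd_tail >= size;
}
```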
As an example, as shown in fig. 6, the step S204 may include: step S600, updating the first current index value of the circular queue, where the updated first current index value represents the write head position of the writable data in the circular queue; step S602, determining the start position of the write operation according to the updated first current index value, and performing the write operation; and step S604, if the target thread has finished writing its data, updating the second current index value of the circular queue, where the updated second current index value represents the write tail position of the written data in the circular queue. In this embodiment, updating the first current index value prevents other threads from preempting the update: only one thread can update (write or read) at a time, which guarantees the safety of multithreaded programming without the overhead of locks. Thus, after the target thread has changed the write head position, the pending updates of other threads fail and must be retried.
Second: the target thread is a consumer thread, and the consumer thread is used for performing the read operation on the circular queue.
As an example, it may first be determined whether the circular queue is empty. If the circular queue is empty, there is no data to read, and it can be determined directly that the target thread cannot perform the read operation, which improves judgment efficiency. As shown in fig. 7, the data processing method may further include steps S700 to S704: step S700, determining whether the second history index value and the third history index value are equal; step S702, if the second history index value is equal to the third history index value, determining that the target thread cannot perform a read operation on the circular queue; and step S704, if the second history index value and the third history index value are not equal, determining whether the third history index value is the same as the third current index value of the circular queue, so as to determine whether the target thread performs the read operation.
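Under the same monotonic-index assumption, the emptiness test of step S700 is a single equality comparison:

```cpp
#include <cstdint>

// The queue is empty when every written element has been claimed by a
// reader, i.e. the read head has caught up with the write tail.
bool queue_empty(uint64_t wr_tail, uint64_t rd_head) {
    return rd_head == wr_tail;
}
```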
As an example, as shown in fig. 8, the step S204 may include: step S800, updating the third current index value of the circular queue, where the updated third current index value represents the read head position of the readable data in the circular queue; step S802, determining the start position of the read operation according to the updated third current index value, and performing the read operation; and step S804, if the target thread has finished reading its data, updating the fourth current index value of the circular queue, where the updated fourth current index value represents the read tail position of the read data in the circular queue. In this embodiment, updating the third current index value prevents other threads from preempting the update: only one thread can update (write or read) at a time, which guarantees the safety of multithreaded programming without the overhead of locks. Thus, after the target thread has changed the read head position, the pending updates of other threads fail and must be retried.
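Steps S800 to S804 mirror the producer side; a consumer-side sketch under the same assumed names and power-of-two layout:

```cpp
#include <atomic>
#include <cstdint>
#include <vector>

struct RingReader {
    std::vector<uint64_t> buf;
    std::atomic<uint64_t> rd_head{0}, rd_tail{0};
    explicit RingReader(std::vector<uint64_t> data) : buf(std::move(data)) {}

    uint64_t pop() {
        uint64_t pos = rd_head.load(std::memory_order_relaxed);
        // S800: claim the read-head position via compare-and-swap.
        while (!rd_head.compare_exchange_weak(pos, pos + 1,
                                              std::memory_order_acq_rel))
            ;  // on failure, pos is reloaded with the latest head
        // S802: read from the claimed start position.
        uint64_t value = buf[pos & (buf.size() - 1)];
        // S804: advance the read tail past our slot once the read is done,
        // which frees the slot for producers.
        uint64_t expected = pos;
        while (!rd_tail.compare_exchange_weak(expected, pos + 1,
                                              std::memory_order_release))
            expected = pos;
        return value;
    }
};
```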
As an example, as shown in fig. 9, the data processing method may further include an initialization operation of the circular queue: dynamically adjusting the queue size of the circular queue so that the queue size is 2^N, where N is a positive integer; during the computation of the queue size, the temporary variables generated in the process are temporarily stored in registers. In this embodiment, dynamically adjusting the size of the circular queue optimizes the memory layout and thereby improves performance.
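Rounding the requested capacity up to 2^N lets "index mod size" be computed with a cheap bit-mask. A common bit-trick for the rounding is sketched below; this is a standard idiom, not necessarily the patent's implementation:

```cpp
#include <cstdint>

// Round n up to the next power of two by smearing the highest set bit of
// n-1 into all lower positions, then adding one.
uint64_t round_up_pow2(uint64_t n) {
    if (n < 2) return 1;
    n--;
    n |= n >> 1;  n |= n >> 2;  n |= n >> 4;
    n |= n >> 8;  n |= n >> 16; n |= n >> 32;
    return n + 1;
}
```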
As an example, as shown in fig. 10, the data processing method may further include: step S1000, providing branch transfer information to a compiler, so that the compiler can optimize the code according to the branch transfer information. For example, the GCC built-in function __builtin_expect() can be used to provide branch transfer information to the compiler, which helps the compiler optimize the code and reduces the performance loss caused by instruction jumps, yielding high performance.
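A typical way to feed such branch transfer information to GCC or Clang is to wrap __builtin_expect in likely/unlikely macros (a sketch; the function name is illustrative):

```cpp
// Tell the compiler which branch is expected, so it lays out the hot path
// as straight-line fall-through code.
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

int checked_read(int have_data) {
    if (unlikely(!have_data))
        return -1;  // cold path: the queue is empty
    return 0;       // hot path: proceed with the read
}
```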
To make the present application clearer, as shown in fig. 11(a), the target thread is taken as an example to describe how it competes for write permission on the circular queue:
S1100A, start the LOOP, and determine whether the circular queue is full;
the meta-information of the circular queue includes the following data:
wr_head represents the write head position of the circular queue in the last operation;
wr_tail represents the write tail position of the circular queue in the last operation;
rd_head represents the read head position of the circular queue in the last operation;
rd_tail represents the read tail position of the circular queue in the last operation;
if the value of | wr _ head-rd _ tail | > is size, it is determined that the circular queue is full and data cannot be continuously written. Wherein | wr _ head-rd _ tail | represents the amount of data the circular queue has written, and size represents the queue size of the circular queue. And if rd _ head is wr _ tail, determining that the circular queue is empty currently and cannot continuously read data.
It should be noted that the circular queue itself is initialized in memory; the memory for storing the data is allocated by the application itself, and the application ensures that the data can legally be taken out of that memory and that the memory is released in the consumer thread. If the circular queue is full, the number of producer threads is too large relative to the number of consumer threads: over-production and under-consumption cause the queue to back up, and the slow consumption introduces latency, which should usually be avoided. If the circular queue is empty, the number of producer threads is small relative to the number of consumer threads; provided the current producer threads meet the performance requirement, this situation is expected.
It should be noted that the meta information of the circular queue is provided to one or more pending threads that need to compete for the circular queue, and a target thread that gets the meta information first is determined from the one or more pending threads.
S1102A, determining whether wr is the same as the wr_head acquired by the target thread. If so, proceed to step S1104A; otherwise, return to LOOP.
wr is the current write position of the circular queue.
If the two values are the same, it means that wr_head has not been modified in the period since it was provided to the target thread, i.e., the queue has not been occupied by another thread, and wr at this moment still equals wr_head, unchanged.
S1104A, updating wr to the writable write head position in the circular queue (i.e., updating wr_head), and sequentially writing the target thread's data into the circular queue, with the updated wr as the start position.
S1106A, after the write is completed, updating the write tail position of the written data in the circular queue (i.e., updating wr_tail).
As shown in fig. 11(B), taking the target thread as an example, the following describes how it competes for read permission on the circular queue:
S1100B, start the LOOP, and determine whether the circular queue is empty.
S1102B, determining whether the current read position rd of the circular queue is the same as the rd_head acquired by the target thread. If so, proceed to step S1104B; otherwise, return to LOOP.
S1104B, updating rd to the readable read head position in the circular queue (i.e., updating rd_head).
S1106B, updating the read tail position of the read data in the circular queue (i.e., updating rd_tail).
Example two
Fig. 12 schematically shows a block diagram of a data processing system according to the second embodiment of the present application, which may be partitioned into one or more program modules, which are stored in a storage medium and executed by one or more processors to implement the second embodiment of the present application. The program modules referred to in the embodiments of the present application refer to a series of computer program instruction segments capable of performing specific functions, and are more suitable for describing the execution process of the data processing system in the storage medium than the program itself. In an exemplary embodiment, the data processing system is used in a computer device configured with a circular queue for data transfer between multiple threads.
As shown in fig. 12, the data processing system 1200 may include a response module 1210, a determination module 1220, and an operation module 1230, wherein:
a response module 1210, configured to, in response to an operation request of a target thread for the circular queue, provide historical operation position information of the circular queue to the target thread, where the historical operation position information is operation position information of the multiple threads for a last operation of the circular queue, and the target thread is any one of the multiple threads;
a determining module 1220, configured to determine whether the target thread performs a target operation on the circular queue according to the historical operation position information obtained by the target thread and the current operation position information of the circular queue, where the target operation includes a write operation or a read operation; and
an operation module 1230, configured to update the current operation location information and execute the target operation if it is determined that the target thread executes the target operation on the circular queue.
Optionally, the determining module 1220 is further configured to:
judging whether the historical operation position information acquired by the target thread is the same as the current operation position information;
if the historical operation position information acquired by the target thread is the same as the current operation position information, determining that the target thread performs the target operation on the circular queue;
if the historical operation position information acquired by the target thread differs from the current operation position information, executing the following loop operation until they are the same: in response to a re-operation request of the target thread, providing the latest historical operation position information of the circular queue to the target thread, and judging whether the latest historical operation position information acquired by the target thread is the same as the current operation position information.
Optionally, the determining module 1220 is further configured to:
judging whether the number of loop operations is greater than a preset threshold; and
if the number of loop operations is greater than the preset threshold, reducing the number of threads among the plurality of threads.
Optionally, the historical operation position information includes a plurality of historical index values, and the plurality of historical index values includes:
a first historical index value representing a write header location of the circular queue in a last operation;
a second historical index value representing a write tail position of the circular queue in a last operation;
a third history index value representing a read head position of the circular queue in a last operation;
a fourth historical index value representing a read tail position of the circular queue in a last operation;
the first historical index value and the second historical index value are obtained by updating according to the last write operation; and the third historical index value and the fourth historical index value are obtained by updating according to the last reading operation.
Optionally, the target thread is a producer thread, and the producer thread is configured to execute the write operation to the circular queue; the system further comprises a determination module for:
obtaining a difference value between the first historical index value and the fourth historical index value;
judging whether the difference value is smaller than the queue size of the circular queue or not;
if the difference is not smaller than the queue size of the circular queue, judging that the target thread cannot perform a write operation on the circular queue; and
if the difference value is smaller than the queue size of the circular queue, whether the first historical index value is the same as the first current index value of the circular queue is judged, and whether the target thread executes the write-in operation is determined.
Optionally, the operation module 1230 is further configured to:
updating a first current index value in the circular queue; the updated first current index value is used for representing the write head position of the writable data in the circular queue;
determining the initial position of the write operation according to the updated first current index value, and executing the write operation; and
if the data writing of the target thread is completed, updating a second current index value of the circular queue; the updated second current index value is used for representing the write tail position of the written data in the circular queue.
Optionally, the target thread is a consumer thread, and the consumer thread is configured to execute the read operation to the circular queue; the system further comprises a determination module for:
judging whether the second history index value and the third history index value are equal or not;
if the second historical index value is equal to the third historical index value, determining that the target thread cannot perform reading operation on the circular queue; and
if the second history index value and the third history index value are not equal, determining whether the third history index value and a third current index value of the circular queue are the same, so as to determine whether the target thread executes the read operation.
Optionally, the operation module 1230 is further configured to:
updating a third current index value in the circular queue; wherein the updated third current index value is used to represent a read head position of the readable data in the circular queue;
determining the initial position of the reading operation according to the updated third current index value, and executing the reading operation; and
if the data reading of the target thread is completed, updating a fourth current index value of the circular queue; the updated fourth current index value is used to indicate a read tail position of the read data in the circular queue.
Optionally, the system further includes an initialization module, configured to initialize the circular queue:
dynamically adjusting the queue size of the circular queue so that the queue size is 2^N, where N is a positive integer;
wherein, during the computation of the queue size, temporary variables generated in the process are temporarily stored in registers.
Optionally, the system further includes a providing module, configured to:
and providing the branch transfer information to a compiler so that the compiler can carry out code optimization according to the branch transfer information.
EXAMPLE III
Fig. 13 schematically shows a hardware architecture diagram of a computer device suitable for implementing the data processing method according to a third embodiment of the present application. In this embodiment, the computer device 10000 is a device capable of automatically performing numerical calculation and/or information processing according to a preset or stored instruction. Such as mobile devices, tablet devices, laptop computers, computing stations, smart devices (e.g., smart watches, smart glasses), virtual reality devices, gaming devices, set-top boxes, digital streaming devices, vehicle terminals, smart televisions, television boxes, MP4 (moving picture experts group audio layer IV) players, and server-based virtual terminal devices, among others. As shown in fig. 13, computer device 10000 includes at least, but is not limited to: the memory 10010, processor 10020, and network interface 10030 may be communicatively linked to each other via a system bus. Wherein:
the memory 10010 includes at least one type of computer-readable storage medium including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. In some embodiments, the storage 10010 may be an internal storage module of the computer device 10000, such as a hard disk or a memory of the computer device 10000. In other embodiments, the memory 10010 may also be an external storage device of the computer device 10000, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), or the like, provided on the computer device 10000. Of course, the memory 10010 may also include both internal and external memory modules of the computer device 10000. In this embodiment, the memory 10010 is generally used for storing an operating system installed in the computer device 10000 and various application software, such as program codes of a data processing method. In addition, the memory 10010 may also be used to temporarily store various types of data that have been output or are to be output.
Processor 10020, in some embodiments, can be a Central Processing Unit (CPU), controller, microcontroller, microprocessor, or other data Processing chip. The processor 10020 is generally configured to control overall operations of the computer device 10000, such as performing control and processing related to data interaction or communication with the computer device 10000. In this embodiment, the processor 10020 is configured to execute program codes stored in the memory 10010 or process data.
Network interface 10030 may comprise a wireless network interface or a wired network interface, and network interface 10030 is generally used to establish a communication link between computer device 10000 and other computer devices. For example, the network interface 10030 is used to connect the computer device 10000 to an external terminal through a network, establish a data transmission channel and a communication link between the computer device 10000 and the external terminal, and the like. The network may be a wireless or wired network such as an Intranet (Intranet), the Internet (Internet), a Global System of Mobile communication (GSM), Wideband Code Division Multiple Access (WCDMA), a 4G network, a 5G network, Bluetooth (Bluetooth), or Wi-Fi.
It should be noted that fig. 13 only illustrates a computer device having the components 10010-10030, but it is to be understood that not all illustrated components are required and that more or less components may be implemented instead.
In this embodiment, the data processing method stored in the memory 10010 can be further divided into one or more program modules and executed by one or more processors (in this embodiment, the processor 10020) to complete the present application.
Example four
The present embodiment also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the data processing method in the embodiments.
In this embodiment, the computer-readable storage medium includes a flash memory, a hard disk, a multimedia card, a card type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a Programmable Read Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. In some embodiments, the computer readable storage medium may be an internal storage unit of the computer device, such as a hard disk or a memory of the computer device. In other embodiments, the computer readable storage medium may be an external storage device of the computer device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like provided on the computer device. Of course, the computer-readable storage medium may also include both internal and external storage devices of the computer device. In the present embodiment, the computer-readable storage medium is generally used for storing an operating system and various types of application software installed in the computer device, for example, the program codes of the data processing method in the embodiment, and the like. Further, the computer-readable storage medium may also be used to temporarily store various types of data that have been output or are to be output.
It will be apparent to those skilled in the art that the modules or steps of the embodiments of the present application described above may be implemented by a general-purpose computing device; they may be centralized on a single computing device or distributed across a network of multiple computing devices. Alternatively, they may be implemented by program code executable by a computing device, so that they may be stored in a storage device and executed by a computing device; in some cases, the steps shown or described may be performed in an order different from that described herein. They may also be separately fabricated into individual integrated circuit modules, or multiple modules or steps among them may be fabricated into a single integrated circuit module. Thus, embodiments of the present application are not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application, or which are directly or indirectly applied to other related technical fields, are included in the scope of the present application.

Claims (13)

1. A data processing method is used in computer equipment, and is characterized in that the computer equipment is provided with a circular queue which is used for data transmission among a plurality of threads; the data processing method comprises the following steps:
responding to an operation request of a target thread for the circular queue, and providing historical operation position information of the circular queue to the target thread, wherein the historical operation position information is operation position information of the plurality of threads for the last operation of the circular queue, and the target thread is any one of the plurality of threads;
determining whether the target thread executes target operation on the circular queue according to the historical operation position information acquired by the target thread and the current operation position information of the circular queue, wherein the target operation comprises write operation or read operation; and
and if the target operation is executed on the circular queue by the target thread, updating the current operation position information and executing the target operation.
2. The data processing method according to claim 1, wherein the determining, according to the historical operation position information acquired by the target thread and the current operation position information of the circular queue, whether the target thread performs the target operation on the circular queue includes:
judging whether the historical operation position information acquired by the target thread is the same as the current operation position information;
if the historical operation position information acquired by the target thread is the same as the current operation position information, determining that the target thread performs the target operation on the circular queue;
if the historical operation position information acquired by the target thread differs from the current operation position information, executing the following loop operation until they are the same: in response to a re-operation request of the target thread, providing the latest historical operation position information of the circular queue to the target thread, and judging whether the latest historical operation position information acquired by the target thread is the same as the current operation position information.
3. The data processing method of claim 2, further comprising:
judging whether the number of loop operations is greater than a preset threshold; and
if the number of loop operations is greater than the preset threshold, reducing the number of threads among the plurality of threads.
4. The data processing method of claim 1, wherein:
the historical operational location information includes a plurality of historical index values, the plurality of historical index values including:
a first historical index value representing a write header location of the circular queue in a last operation;
a second historical index value representing a write tail position of the circular queue in a last operation;
a third history index value representing a read head position of the circular queue in a last operation;
a fourth historical index value representing a read tail position of the circular queue in a last operation;
the first historical index value and the second historical index value are obtained by updating according to the last write operation; and the third historical index value and the fourth historical index value are obtained by updating according to the last reading operation.
5. The data processing method of claim 4, wherein the target thread is a producer thread, the producer thread being configured to perform the write operation to the circular queue; the method further comprises the following steps:
obtaining a difference value between the first historical index value and the fourth historical index value;
judging whether the difference value is smaller than the queue size of the circular queue;
if the difference value is not smaller than the queue size of the circular queue, determining that the target thread cannot perform a write operation on the circular queue; and
if the difference value is smaller than the queue size of the circular queue, judging whether the first historical index value is the same as a first current index value of the circular queue, so as to determine whether the target thread executes the write operation.
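The producer-side fullness check of claim 5 follows directly from monotonically increasing indices: (write head − read tail) is the number of slots currently claimed, so the queue is full when that difference reaches the queue size. A minimal sketch, with illustrative names:

```cpp
#include <cassert>
#include <cstdint>

// Claim 5's check: a producer may write only while the difference between the
// first index (write head) and fourth index (read tail) is below the capacity.
bool producer_can_write(uint64_t write_head, uint64_t read_tail,
                        uint64_t queue_size) {
    // Unsigned subtraction is safe here because write_head >= read_tail
    // by construction with monotonic counters.
    return (write_head - read_tail) < queue_size;
}
```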
6. The data processing method of claim 5, wherein if it is determined that the target operation is performed on the circular queue by the target thread, updating the current operation location information and performing the target operation comprises:
updating a first current index value in the circular queue; the updated first current index value is used for representing the write head position of the writable data in the circular queue;
determining the initial position of the write operation according to the updated first current index value, and executing the write operation; and
if the data writing of the target thread is completed, updating a second current index value of the circular queue; the updated second current index value is used for representing the write tail position of the written data in the circular queue.
7. The data processing method of claim 4, wherein the target thread is a consumer thread, the consumer thread being configured to perform the read operation to the circular queue; the method further comprises the following steps:
judging whether the second historical index value and the third historical index value are equal;
if the second historical index value is equal to the third historical index value, determining that the target thread cannot perform a read operation on the circular queue; and
if the second historical index value and the third historical index value are not equal, judging whether the third historical index value is the same as a third current index value of the circular queue, so as to determine whether the target thread executes the read operation.
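The consumer-side emptiness check of claim 7 is the mirror of the producer check: when the second index (write tail, end of published data) equals the third index (read head, next slot to read), there is nothing to consume. A minimal sketch with illustrative names:

```cpp
#include <cassert>
#include <cstdint>

// Claim 7's check: readable data exists only while the write tail has
// advanced past the read head.
bool consumer_can_read(uint64_t write_tail, uint64_t read_head) {
    return write_tail != read_head;  // equal indices mean an empty queue
}
```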
8. The data processing method of claim 7, wherein if it is determined that the target operation is performed on the circular queue by the target thread, updating the current operation location information and performing the target operation comprises:
updating a third current index value in the circular queue; wherein the updated third current index value is used to represent a read head position of the readable data in the circular queue;
determining the initial position of the reading operation according to the updated third current index value, and executing the reading operation; and
if the data reading of the target thread is completed, updating a fourth current index value of the circular queue; the updated fourth current index value is used to indicate a read tail position of the read data in the circular queue.
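The two-step read of claim 8 mirrors the write path: advance the third current index to claim a slot, read from the derived position, then advance the fourth current index to mark the slot consumed. A simplified sketch with assumed names and an `int` payload:

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>
#include <vector>

struct ReadQueue {
    std::vector<int> buf;
    std::atomic<uint64_t> read_head{0};
    std::atomic<uint64_t> read_tail{0};
    explicit ReadQueue(std::vector<int> data) : buf(std::move(data)) {}

    bool pop(int& out) {
        uint64_t slot = read_head.load();
        // Step 1: update the third current index value (claim the slot).
        if (!read_head.compare_exchange_strong(slot, slot + 1)) return false;
        // Step 2: the start position of the read derives from the index.
        out = buf[slot % buf.size()];
        // Step 3: update the fourth current index value (mark consumed).
        // A full implementation would first wait for earlier readers to finish.
        read_tail.store(slot + 1);
        return true;
    }
};
```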
9. The data processing method according to any one of claims 1 to 8, further comprising an initialization operation of the circular queue:
dynamically adjusting the queue size of the circular queue so that the queue size is 2^N, where N is a positive integer;
wherein, during the computation of the queue size, temporary variables generated in the computation process are temporarily stored in a register.
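The point of forcing the size to 2^N in claim 9 is that the modulo used to map a monotonic index onto a buffer slot reduces to a bit-mask: `index % size == index & (size - 1)` when `size` is a power of two. A sketch of the rounding step; `next_pow2` is an illustrative helper name.

```cpp
#include <cassert>
#include <cstdint>

// Rounds the requested capacity up to the next power of two, so slot lookup
// can use a mask instead of a division.
uint64_t next_pow2(uint64_t n) {
    if (n <= 1) return 1;
    uint64_t size = 1;
    while (size < n) size <<= 1;  // loop temporaries typically stay in registers
    return size;
}
```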
10. The data processing method according to any one of claims 1 to 8, further comprising:
and providing the branch transfer information to a compiler so that the compiler can carry out code optimization according to the branch transfer information.
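One common way to provide branch-transfer information to the compiler, as in claim 10, is a branch-probability hint: GCC and Clang expose `__builtin_expect`, and C++20 adds the `[[likely]]`/`[[unlikely]]` attributes. The patent does not name a specific mechanism, so the sketch below is an assumed illustration.

```cpp
#include <cassert>

// Hint macros for GCC/Clang; the compiler lays out the expected path first.
#define LIKELY(x)   __builtin_expect(!!(x), 1)
#define UNLIKELY(x) __builtin_expect(!!(x), 0)

// Illustrative use: an empty queue is the rare case on the hot read path.
int checked_read(int available) {
    if (UNLIKELY(available == 0)) return -1;  // rare: nothing to read
    return available - 1;                     // common path
}
```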
11. A data processing system for use in a computer device, wherein the computer device is configured with a circular queue for data transfer between a plurality of threads; the data processing system includes:
a response module, configured to provide, in response to an operation request of a target thread for the circular queue, historical operation position information of the circular queue to the target thread, where the historical operation position information is operation position information of the multiple threads for a last operation of the circular queue, and the target thread is any one of the multiple threads;
the judging module is used for determining whether the target thread executes target operation on the circular queue according to the historical operation position information acquired by the target thread and the current operation position information of the circular queue, wherein the target operation comprises write operation or read operation; and
and the operation module is used for updating the current operation position information and executing the target operation if the target thread is determined to execute the target operation on the circular queue.
12. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method of any of claims 1 to 10 are implemented by the processor when executing the computer program.
13. A computer-readable storage medium, having stored thereon a computer program, the computer program being executable by at least one processor to cause the at least one processor to perform the steps of the method according to any one of claims 1 to 10.
CN202110635503.6A 2021-06-08 2021-06-08 Data processing method and system Pending CN113377509A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110635503.6A CN113377509A (en) 2021-06-08 2021-06-08 Data processing method and system

Publications (1)

Publication Number Publication Date
CN113377509A true CN113377509A (en) 2021-09-10

Family

ID=77576369

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110635503.6A Pending CN113377509A (en) 2021-06-08 2021-06-08 Data processing method and system

Country Status (1)

Country Link
CN (1) CN113377509A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114385352A (en) * 2021-12-17 2022-04-22 南京中科晶上通信技术有限公司 Satellite communication system, data caching method thereof and computer-readable storage medium
CN115113931A (en) * 2022-07-22 2022-09-27 瀚博半导体(上海)有限公司 Data processing system, method, artificial intelligence chip, electronic device and medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9128749B1 (en) * 2013-06-27 2015-09-08 Emc Corporation Method and system for lock free statistics collection
CN105868031A (en) * 2016-03-24 2016-08-17 车智互联(北京)科技有限公司 A data transmission device and method
CN108664335A (en) * 2017-04-01 2018-10-16 北京忆芯科技有限公司 The method and apparatus of queue communication is carried out by agency
CN111625376A (en) * 2017-04-01 2020-09-04 北京忆芯科技有限公司 Method and message system for queue communication through proxy
CN108363624A (en) * 2018-02-12 2018-08-03 聚好看科技股份有限公司 A kind of no locking wire journey orderly controls the method, apparatus and server of storage information
CN108710531A (en) * 2018-04-20 2018-10-26 深圳市文鼎创数据科技有限公司 Method for writing data, device, terminal device and the storage medium of round-robin queue
CN109271242A (en) * 2018-08-28 2019-01-25 百度在线网络技术(北京)有限公司 Data processing method, device, equipment and medium based on queue
CN110134439A (en) * 2019-03-30 2019-08-16 北京百卓网络技术有限公司 The method of method for constructing data structure and write-in data, reading data without lockization

Similar Documents

Publication Publication Date Title
US8892803B2 (en) Interrupt on/off management apparatus and method for multi-core processor
TWI552076B (en) Systems and methods of using a hypervisor with guest operating systems and virtual processors
JPH04308961A (en) Means and apparatus for notifying state of synchronous locking of occupied process
CN113377509A (en) Data processing method and system
US20180191706A1 (en) Controlling access to a shared resource
CN107577523B (en) Task execution method and device
JP2000284978A (en) Interface system for asynchronously updating common resource and method for the same
CN107077390B (en) Task processing method and network card
US20060161924A1 (en) Scheduling method, in particular context scheduling method, and device to be used with a scheduling method
JPH07200323A (en) Method and system for control of ownership of released synchronous mechanism
CN102906706A (en) Information processing device and information processing method
CN107368367B (en) Resource allocation processing method and device and electronic equipment
US8595726B2 (en) Apparatus and method for parallel processing
CN112905365B (en) Data processing method, device, equipment and medium
CN114168271B (en) Task scheduling method, electronic device and storage medium
CN116225728B (en) Task execution method and device based on coroutine, storage medium and electronic equipment
CN112416556A (en) Data read-write priority balancing method, system, device and storage medium
CN113254223B (en) Resource allocation method and system after system restart and related components
US20180373573A1 (en) Lock manager
CN105808210A (en) Shared resource access method and apparatus
US20200097287A1 (en) Ticket Locks with Enhanced Waiting
CN112346879B (en) Process management method, device, computer equipment and storage medium
CN112882831A (en) Data processing method and device
US20090193220A1 (en) Memory management device applied to shared-memory multiprocessor
CN113419871B (en) Object processing method based on synchronous groove and related product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination