CN113835901A - Read lock operation method, write lock operation method and system - Google Patents


Info

Publication number
CN113835901A
CN113835901A (application CN202111082328.9A)
Authority
CN
China
Prior art keywords
lock
read
core
write
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111082328.9A
Other languages
Chinese (zh)
Inventor
席华锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Oceanbase Technology Co Ltd
Original Assignee
Beijing Oceanbase Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Oceanbase Technology Co Ltd filed Critical Beijing Oceanbase Technology Co Ltd
Priority to CN202111082328.9A priority Critical patent/CN113835901A/en
Publication of CN113835901A publication Critical patent/CN113835901A/en
Pending legal-status Critical Current

Landscapes

  • Techniques For Improving Reliability Of Storages (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The embodiments of this application disclose a read lock operation method, a write lock operation method, and a system. The read lock operation method includes the following steps: setting a private reference count corresponding to each core; and, while threads of different cores read the same data, performing read-lock acquire and release operations using the private reference counts corresponding to those cores. With the embodiments of this application, when threads of different cores read the same data, each thread can independently operate on the private reference count corresponding to its own core. Because the private reference counts of different cores do not need to be synchronized between cores, execution efficiency is improved. The scalability of the read lock is also improved: no matter how many cores' threads acquire and release the read lock simultaneously, the time required to acquire and release it barely increases, further improving execution efficiency.

Description

Read lock operation method, write lock operation method and system
Technical Field
Embodiments of the present disclosure relate to the field of information technologies, and in particular, to a read lock operation method, a write lock operation method, and a system.
Background
A thread, also called a lightweight process (LWP), is a single sequential flow of control within a process and is the smallest unit of program execution. In an operating system that supports threads, the process is generally the basic unit of resource allocation, while the thread is the basic unit of independent execution and scheduling. Threads may execute concurrently: multiple threads in one process may run concurrently, and threads in different processes may also run concurrently. In particular, in a computer system with multiple computing cores, such as one with multiple CPU cores, threads on different cores may also execute concurrently.
When multiple threads execute concurrently, they often need to access the same data. From the perspective of the accessed data, the data is shared among different threads. When multiple threads access shared data, the integrity of that data must be guaranteed. For example, two threads must not modify the shared data at the same time, and one thread must not read shared data that another thread has only half modified. The classical approach is a lock mechanism: a "read lock" is placed on data while a thread reads it, and a "write lock" is placed on data while a thread writes it. Before a thread reads a piece of data, it takes a read lock on the data, and after the read operation finishes, it releases the read lock. Similarly, before a thread writes a piece of data, it takes a write lock on the data, and after the write operation finishes, it releases the write lock. Typically, read_ref is used as the reference count of reader threads, and writer_ID records the ID of the writer thread.
For read operations performed on the same data by different threads, multiple read locks may be held at once. For example, if thread 1 is to read a piece of data, it takes a read lock before reading, specifically by adding 1 to the value of read_ref (for example, read_ref is an integer (int) with initial value 0), and then reads the data. While it is reading, thread 2 also reads the same data, likewise adding 1 to read_ref before reading; the value of read_ref is now 2. When thread 1 finishes its read, it subtracts 1 from read_ref to release its read lock, leaving read_ref at 1. When thread 2 later finishes reading the data, it subtracts 1 from read_ref to release its read lock, and read_ref returns to 0. Read locks on the same data can thus be held multiple times simultaneously; read locks are shared.
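The global reference-count scheme just described can be sketched in C. This is a minimal illustration, not the patent's implementation: the identifier read_ref follows the text, and C11 atomics stand in for the atomic CPU instructions discussed later.

```c
#include <stdatomic.h>

/* Global read reference count shared by all reader threads (initially 0). */
static atomic_int read_ref = 0;

/* Acquire the shared read lock: atomically increment the reader count. */
static void read_lock(void) {
    atomic_fetch_add(&read_ref, 1);
}

/* Release the read lock: atomically decrement the reader count. */
static void read_unlock(void) {
    atomic_fetch_sub(&read_ref, 1);
}

/* Number of readers currently holding the lock. */
static int reader_count(void) {
    return atomic_load(&read_ref);
}
```

Because acquisition only increments a counter, any number of readers can hold the lock at once, which is exactly the sharing property the text describes.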
For write operations performed on the same data by different threads, the write lock can be held only once. For example, if thread 1 is to write a piece of data, it takes a write lock before writing, specifically by updating the value of writer_ID to the ID of thread 1 (for example, writer_ID is an integer with initial value 0, and no thread's ID is 0), and then writes the data. While it is writing, thread 2 also tries to write the same data, but because writer_ID is not 0 at this time, thread 2 cannot take the write lock and cannot write the data. When thread 1 finishes its write, it releases the lock by setting writer_ID back to 0. After its earlier attempt failed, thread 2 waits for a period of time, finds that writer_ID is now 0, and can take the write lock: it updates writer_ID to the ID of thread 2 and then writes the data. When thread 2 finishes its write, it releases the lock by setting writer_ID back to 0. As can be seen, write locks on the same data cannot be held simultaneously; write locks are mutually exclusive.
In addition, the write lock and the read lock are mutually exclusive: at any moment, data that is read-locked cannot also be write-locked, and data that is write-locked cannot also be read-locked. Thus, before a thread reads data, it checks whether the data's writer_ID value is 0. If it is 0, the read operation can proceed; if not, the thread must wait for writer_ID to become 0. Similarly, before a thread writes data, it checks whether the data's read_ref value is 0. If it is 0, the write operation can proceed; if not, the thread must wait for read_ref to become 0. In fact, for taking a read lock, to further guard against another thread taking a write lock between the check of writer_ID and the corresponding read operation, i.e., to avoid the conflict detection failing in that case, the thread checks again whether writer_ID is 0 after adding 1 to read_ref, and performs the read operation only if it is still 0. Similarly, for taking a write lock, to further guard against another thread taking a read lock between the check of read_ref and the corresponding write operation, the thread checks again whether read_ref is 0 after updating writer_ID to its own ID, and performs the write operation only if it is still 0.
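The check-then-recheck protocol above can be sketched as follows. This is a hedged illustration under assumed names (try_read_lock and try_write_lock are not from the patent); on conflict each function backs out its tentative change and returns false.

```c
#include <stdatomic.h>
#include <stdbool.h>

static atomic_int read_ref  = 0;  /* shared reader count; 0 when no readers   */
static atomic_int writer_id = 0;  /* ID of the writing thread; 0 when no writer */

/* Try to take the read lock; fail if a writer holds or grabs the lock. */
static bool try_read_lock(void) {
    if (atomic_load(&writer_id) != 0)          /* a writer is active */
        return false;
    atomic_fetch_add(&read_ref, 1);            /* optimistically register as reader */
    if (atomic_load(&writer_id) != 0) {        /* re-check: a writer slipped in between */
        atomic_fetch_sub(&read_ref, 1);        /* back out */
        return false;
    }
    return true;                               /* safe to read */
}

/* Try to take the write lock for thread `tid` (tid != 0). */
static bool try_write_lock(int tid) {
    int expected = 0;
    if (!atomic_compare_exchange_strong(&writer_id, &expected, tid))
        return false;                          /* another writer holds the lock */
    if (atomic_load(&read_ref) != 0) {         /* re-check: readers are active */
        atomic_store(&writer_id, 0);           /* back out */
        return false;
    }
    return true;                               /* safe to write */
}

static void read_unlock(void)  { atomic_fetch_sub(&read_ref, 1); }
static void write_unlock(void) { atomic_store(&writer_id, 0); }
```

The second check in each path is what closes the race window between the initial check and the operation, as described in the text.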
A thread changes the values of read_ref and writer_ID using atomic operations. Atomic operations are typically instructions that the CPU provides with atomicity guarantees. While one thread executes an atomic operation, it cannot be interrupted by other threads, and execution cannot be switched to another thread; in other words, once such an operation starts, it runs until it completes.
In the process of implementing the present application, the inventor finds that at least the following problems exist in the prior art:
in a computer system with multiple computing cores, threads of different cores may read and write the same data. In particular, it is common for a large number of read operations, and no write operation, to be performed on the same data over a period of time. Each core typically has a corresponding cache, and each core maintains a read_ref value in its cache. According to the prior-art implementation, the read_ref values in the caches of all cores must be kept consistent. Thus, in a multi-core computer system, once the read_ref value in one core's cache changes, that core communicates with the other cores to notify them of the change, and on receiving the notification the other cores update the read_ref values in their own caches.
As a result, in the prior art, when multiple threads of different cores read the same data, the inter-core communication takes time, so the atomic operation that changes the read_ref value in each core's cache takes time, and execution efficiency is low.
Disclosure of Invention
An object of the embodiments of the present application is to provide a read lock operation method, a write lock operation method and a system, so as to improve execution efficiency.
To solve the foregoing technical problem, an embodiment of the present application provides a read lock operation method, a write lock operation method, and a system, which are implemented as follows:
a method of read lock operation comprising:
setting a private reference count corresponding to each core;
and, while threads of different cores read the same data, performing read-lock acquire and release operations using the private reference counts corresponding to the different cores.
A read lock operation system comprises a data unit, a first cache unit, a second cache unit, a first computing core, and a second computing core, wherein:
the data unit is configured to store data;
the first cache unit is configured to store a first private reference count allocated for the first computing core;
the second cache unit is configured to store a second private reference count allocated for the second computing core;
the first computing core and the second computing core are configured to read the same data in the data unit; and,
while a thread of the first computing core reads the data, read-lock acquire and release operations are performed using the private reference count corresponding to the first core;
and while a thread of the second computing core reads the data, read-lock acquire and release operations are performed using the private reference count corresponding to the second core.
A write lock operation method, comprising:
before the data is written, judging whether a read operation process for the data exists in all the computing cores;
before the data is written, judging whether the data is in another writing operation process;
and if both judgment results are negative, performing write-lock acquire and release operations using a global write lock while the write operation is performed on the data.
A write lock operation system comprises a data unit, a first judgment unit, a second judgment unit, and a write-lock locking and unlocking unit, wherein:
the data unit is configured to store data;
the first judgment unit is configured to judge, before the write operation is performed on the data, whether any computing core is in the process of reading the data;
the second judgment unit is configured to judge, before the write operation is performed on the data, whether the data is in the process of another write operation;
and the write-lock locking and unlocking unit is configured to perform write-lock acquire and release operations using a global write lock while the write operation is performed on the data, when the judgment results of both the first judgment unit and the second judgment unit are negative.
With the technical solutions provided by the embodiments of this application, when threads of different cores read the same data, each independently operates on the private reference count corresponding to its own core. Because the private reference counts of different cores do not need to be synchronized between cores, execution efficiency is improved. The scalability of the read lock is also improved: no matter how many cores' threads acquire and release the read lock simultaneously, the time required to acquire and release it barely increases, further improving execution efficiency.
Drawings
To illustrate the embodiments of the present application or the technical solutions of the prior art more clearly, the drawings needed in describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some of the embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flow chart of one embodiment of a method for read lock operation of the present application;
FIG. 2 is a block diagram of one embodiment of a read lock operating system of the present application;
FIG. 3 is a flow chart of one embodiment of a write lock operation method of the present application;
FIG. 4 is a block diagram of one embodiment of a write lock operating system of the present application.
Detailed Description
The embodiment of the application provides a read lock operation method, a write lock operation method and a system.
In order to make those skilled in the art better understand the technical solutions in the present application, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
An embodiment of a method of operating a read lock of the present application is first described.
FIG. 1 is a flow chart illustrating one embodiment of a method for read lock operation of the present application. As shown in fig. 1, the method of this embodiment includes:
s100: the private reference count corresponding to each core is set.
Modern CPUs employ many techniques to counteract the latency of memory accesses. In the time taken by a single read or write of memory data, the CPU could execute hundreds of instructions. A multi-level static random access memory (SRAM) cache (hereinafter simply "cache") is the main means of reducing the impact of this latency.
For example, in a dual-core computer system, core1 and core2 have corresponding cache1 and cache2, respectively. The cache may be the cache of a computing core. A CPU often has a first-level cache and a second-level cache, and some CPUs even have a third-level cache. For a CPU with first-level and second-level caches, the data the CPU is to operate on is read from memory into the second-level cache, then from the second-level cache into the first-level cache, and then from the first-level cache into the CPU for execution. Generally, the closer a memory is to the CPU, the faster it is but the higher its cost; the farther from the CPU, the slower but cheaper. Data frequently read and written by the CPU is therefore kept in memory close to the CPU, improving the utilization of the more expensive memory.
In this step, the private reference count (private_read_ref) may preferably be placed in the cache. For example, it may be placed in the CPU's first-level cache. Of course, depending on the CPU architecture and the capacities of the different cache levels, the private reference count may instead be placed in the second-level cache, or in any other memory whose read speed is of the same order as the CPU's atomic operation speed; the embodiments of this application do not limit this. In fact, caches are usually transparent to the program: the program has no way to control whether a variable is placed in the cache, or in which cache level. When a program needs to operate on a variable, the CPU checks whether the variable is in the first-level cache and, if so, reads it from there directly. If not, it checks the second-level cache: if the variable is there, it is loaded from the second-level cache into the first-level cache; if not, it is loaded from memory into the second-level and then the first-level cache.
In the prior art, read operations of different threads on the same data involve the same reference count, i.e., operations on one shared count. Following common convention in the computer field, this reference count read_ref is called the global read_ref. Specifically, when different threads of the same computing core, whether in the same process or in different processes, read the same data, they perform increment (++) or decrement (--) operations on the same global read_ref. If a multi-core computer system still uses only one global reference count for all cores, the problems analyzed in the background arise.
In this step, a private reference count is set for each core. For example, for core1, a corresponding private reference count is set, e.g., read_ref_core1; for core2, a corresponding private reference count is likewise set, e.g., read_ref_core2; and so on for any other cores.
The private reference count corresponding to each core need not be permanently (or fixedly) allocated; it may be allocated temporarily. For example, it may be allocated before a thread of the core first takes a read lock on the data, and reclaimed after the core's threads finish reading the data. Specifically, an array of private reference counts, read_ref[], may be set up. Before a thread of a core first takes a read lock on the data, it applies for one entry of the read_ref[] array. The read_ref[] array can be made large enough; each entry may be declared as an integer (int) and initialized to 0. Of course, for read operations on a given piece of data, the entries of the read_ref[] array may instead be fixedly allocated to the cores.
Preferably, in actual operation, each entry of the read_ref[] array may be allocated its own cache line in the cache. The cache line is the smallest unit at which a multi-core CPU maintains cache coherence, and it is also the actual unit of memory exchange. In practice, a cache line on most platforms is larger than 8 bytes, and most cache lines are 64 bytes. If the read_ref[] array is defined with 8-byte entries, one cache line can store 8 read_ref entries. If more than one read_ref is stored in one cache line, operations on different elements of the array will conflict with each other. To avoid such conflicts, each read_ref in the array may be stored in its own cache line: for example, each entry of the read_ref[] array may be declared as a structure whose size is 64 bytes. Each entry then occupies a cache line exclusively, avoiding conflicts during operation.
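The one-counter-per-cache-line layout can be sketched in C11. The names and the core count here are illustrative assumptions; _Alignas pads each entry out to the 64-byte line size assumed in the text, so counts for different cores never share a line.

```c
#include <stdatomic.h>

#define CACHE_LINE 64   /* assumed cache-line size; 64 bytes on most platforms */
#define MAX_CORES  8    /* illustrative core count */

/* One private reader count per core. The alignment forces each entry onto
 * its own cache line, so incrementing one core's count never invalidates
 * another core's cached copy of its own count (no false sharing). */
struct padded_ref {
    _Alignas(CACHE_LINE) atomic_int read_ref;
};

static struct padded_ref read_ref_arr[MAX_CORES]; /* zero-initialized */

/* Acquire/release the read lock on behalf of a thread running on `core`. */
static void core_read_lock(int core)   { atomic_fetch_add(&read_ref_arr[core].read_ref, 1); }
static void core_read_unlock(int core) { atomic_fetch_sub(&read_ref_arr[core].read_ref, 1); }
```

The alignment makes sizeof(struct padded_ref) equal to one full cache line, matching the 64-byte structure declaration suggested in the text.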
S110: and in the process of reading the same data by the threads of different cores, performing reading lock adding and reading lock reading operations by using the private reference counts corresponding to the different cores.
For example, suppose a computer system includes 2 computing cores, core1 and core2, and both read the same data. According to S100, core1 may apply for one private reference count, labeled read_ref_core1; similarly, core2 may apply for its own private reference count, e.g., read_ref_core2.
In this way, while a thread of core1 reads the data, it first takes the read lock; that is, the private reference count read_ref_core1 of core1 is incremented by 1, changing from its initial value 0 to 1. The thread of core1 then reads the data. After the read operation completes, the read lock is released; that is, read_ref_core1 is decremented by 1, changing from 1 back to 0.
Similarly, while a thread of core2 reads the data, it first takes the read lock; that is, the private reference count read_ref_core2 of core2 is incremented by 1, changing from its initial value 0 to 1. The thread of core2 then reads the data. After the read operation completes, the read lock is released; that is, read_ref_core2 is decremented by 1, changing from 1 back to 0.
In the manner above, when threads of different cores read the same data, each independently operates on the private reference count of its own core. The private reference counts of different cores do not need to be synchronized between cores, so execution efficiency is improved. Moreover, the scalability of the read lock is improved: no matter how many cores' threads acquire and release the read lock simultaneously, the time needed to acquire and release it barely increases.
In addition, because the private reference counts of different cores need no inter-core synchronization, the inter-core communication process is eliminated, saving the bandwidth, time, and other overheads that such communication would require.
S110 may specifically include the following steps:
S111: while a thread of the first core reads the data, read-lock acquire and release operations are performed using the private reference count corresponding to the first core.
S112: while a thread of the second core reads the data, read-lock acquire and release operations are performed using the private reference count corresponding to the second core.
Taking a read lock with the private reference count specifically means that the threads of different cores each add 1 to the private reference count of their own core; releasing the read lock specifically means that they each subtract 1 from the private reference count of their own core. Between the acquire and release operations, the threads of each core may read the data.
It should be noted that when multiple different threads of the same core read the same data, the read-lock acquire and release operations may use the same private count. For example, while thread 1 of core1 reads the data, it first takes the read lock: the private reference count read_ref_core1 of core1 is incremented by 1, changing from its initial value 0 to 1, and thread 1 of core1 then reads the data. While thread 1 of core1 is reading, thread 2 of core1 also reads the same data, likewise adding 1 to read_ref_core1 before reading; the value of read_ref_core1 is now 2. When thread 1 of core1 finishes its read, read_ref_core1 is decremented by 1 to release its read lock, leaving the value at 1. When thread 2 of core1 later finishes reading the data, read_ref_core1 is decremented by 1 to release its read lock, and the value returns to 0. Thus, for the same core, no matter how many threads acquire and release the read lock simultaneously, the time needed to acquire and release it barely increases.
It should further be noted that, to avoid data inconsistency, the read lock in the embodiments of this application is still mutually exclusive with the write lock. For example, in a multi-core computer system a global write lock is set, e.g., global_writer_id. If a thread is to write data, it takes the write lock before writing. For example, thread 1 of some core updates the value of global_writer_id to the ID of thread 1 (e.g., global_writer_id is an integer with initial value 0, and no thread's ID is 0), and then writes the data. During the write, a thread of some core (the same core as the write-locking thread, or a different one), here called thread 2, wants to read the same data and applies for the private reference count corresponding to its core, initialized to 0 for example. However, because global_writer_id is not 0 at this time, thread 2 cannot take the read lock and cannot read the data. When thread 1 finishes its write, it releases the lock by setting global_writer_id back to 0. After its earlier read-lock attempt failed, thread 2 waits for a period of time, finds that global_writer_id is now 0, and can take the read lock. Alternatively, after a failed attempt, thread 2 may retry taking the read lock at regular intervals, succeeding once global_writer_id is 0. Thread 2 then adds 1 to the private reference count it applied for, and reads the data; the value of thread 2's private reference count is now 1. When thread 2 finishes its read, it releases the read lock, i.e., the corresponding private reference count is decremented by 1, back to 0.
Based on this, before the thread of the different core in S110 performs the read lock operation on the corresponding private reference count, the method may further include:
s101: the thread of the different core checks whether the data is in the process of writing operation, and if the check result is no, the execution is triggered to S110.
Whether a write operation is in progress may be determined by checking the state of the global write lock. For example, it may be checked whether the global write lock is 0, and S110 is performed when the result is 0.
Conversely, if the checked value of the global write lock is not 0, a write operation on the data is currently in progress. Because of the mutual exclusivity of the write lock and the read lock, the data cannot be read-locked and cannot be read; in this case, S110 is executed only after the global write lock becomes 0.
S101 may be executed after S100 or before S100.
It should be noted that, for taking the read lock, to further guard against another thread taking a write lock between the check of the global_writer_id value and the corresponding read operation, i.e., to avoid the conflict detection failing in that case, the thread checks again whether global_writer_id is 0 after adding 1 to the private reference count corresponding to its core, and performs the read operation only if it is still 0.
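That acquire path can be sketched as follows. This is a minimal illustration under assumed names (try_read_lock and the fixed core count are not from the patent), with the per-core cache-line padding omitted for brevity.

```c
#include <stdatomic.h>
#include <stdbool.h>

#define MAX_CORES 4   /* illustrative core count */

/* One private reader count per core; in a real implementation each would
 * occupy its own cache line. Zero-initialized. */
static atomic_int private_read_ref[MAX_CORES];
static atomic_int global_writer_id = 0;   /* 0 means no writer is active */

/* Try to take the read lock on behalf of a thread running on `core`. */
static bool try_read_lock(int core) {
    if (atomic_load(&global_writer_id) != 0)        /* a writer is active: fail */
        return false;
    atomic_fetch_add(&private_read_ref[core], 1);   /* touch this core's count only */
    if (atomic_load(&global_writer_id) != 0) {      /* re-check for a racing writer */
        atomic_fetch_sub(&private_read_ref[core], 1);
        return false;
    }
    return true;                                    /* safe to read */
}

static void read_unlock(int core) {
    atomic_fetch_sub(&private_read_ref[core], 1);
}
```

Note that the fast path touches only the calling core's own counter plus a read of global_writer_id, so concurrent readers on different cores never write to the same memory location.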
One embodiment of the read lock operating system of the present application is described below. Fig. 2 shows a block diagram of an embodiment of the system.
As shown in fig. 2, the read lock operating system in an embodiment of the present application includes a first computing core 11a, a second computing core 11b, a first cache unit 12a, a second cache unit 12b, and a data unit 13, where each of the computing cores corresponds to a unique cache unit.
Wherein:
the data unit 13 is configured to store data;
the first cache unit 12a is configured to store a first private reference count allocated for the first computing core;
the second cache unit 12b is configured to store a second private reference count allocated for the second computing core;
the first computing core 11a and the second computing core 11b are configured to read the same data in the data unit; and,
while a thread of the first computing core 11a reads the data, read-lock acquire and release operations are performed using the private reference count corresponding to the first core;
and while a thread of the second computing core 11b reads the data, read-lock acquire and release operations are performed using the private reference count corresponding to the second core.
Wherein:
the first cache unit 12a may be a cache of a first computing core;
the second cache unit 12b may be a cache of a second computing core.
In the foregoing method embodiment, a private reference count corresponding to each core is set. The private reference count corresponding to each core may be allocated before a thread of that core first takes a read lock on the data, or may be fixedly allocated. For example, an array of private reference counts, read_ref[], may be set up. Before a thread of a core first takes a read lock on the data, it applies for one entry of the read_ref[] array. The read_ref[] array can be made large enough; each entry may be declared as an integer (int) and initialized to 0. Of course, for read operations on a given piece of data, the entries of the read_ref[] array may instead be fixedly allocated to the cores. Preferably, in actual operation, each entry of the read_ref[] array may be allocated its own cache line in the cache. The cache line is the smallest unit at which a multi-core CPU maintains cache coherence, and it is also the actual unit of memory exchange. In practice, a cache line on most platforms is larger than 8 bytes, and most cache lines are 64 bytes. If the read_ref[] array is defined with 8-byte entries, one cache line can store 8 read_ref entries. If more than one read_ref is stored in one cache line, operations on different elements of the array will conflict with each other. To avoid such conflicts, each read_ref in the array may be stored in its own cache line: for example, each entry of the read_ref[] array may be declared as a structure whose size is 64 bytes. Each entry then occupies a cache line exclusively, avoiding conflicts during operation.
In combination with the above, in an embodiment of the read lock operating system of the present application, caches of different cores may correspond to different cache lines. For example, the first cache unit corresponds to a first cache line, and the second cache unit corresponds to a second cache line.
In the embodiment of the read lock operating system, the read lock operating system may further include a checking unit 14, configured to check whether the data is in a write operation process and, if not, trigger each computing core to perform read-lock and read-unlock operations on the corresponding private reference count.
The read-lock operation on the private reference count includes: the thread of each core performing an add-1 operation on the private reference count corresponding to that core. The read-unlock operation on the private reference count includes: the thread of each core performing a subtract-1 operation on the private reference count corresponding to that core. Between the read-lock and read-unlock operations, the thread of each core may read the data.
One embodiment of a write lock operation method of the present application is described below. Fig. 3 shows a flow chart of an embodiment of the method. As shown in fig. 3, an embodiment of a write lock operation method of the present application includes:
S300: before the data is written, whether a read operation process for the data exists in any of the computing cores is judged.
Determining whether any computing core has a read operation process for the data may be implemented by traversing the private reference count of each computing core corresponding to the data and checking whether it is 0. If every count is 0, the data is not in the process of a read operation; if any count is not 0, the data is in the process of a read operation.
S310: before the data is written, whether the data is in another writing operation process is judged.
S310 may be specifically implemented by determining whether the global write lock for the data is 0. If 0, the data is not in the process of another write operation; if not 0, the data is in the process of another write operation.
S320: if the judgment results of S300 and S310 are both negative, performing write-lock and write-unlock operations with the global write lock in the process of performing the write operation on the data.
Specifically, before a write operation is performed, a write lock is applied to the data; after the write operation, the lock is unlocked for the data.
The global variable in S320 is, for example, global_writer_id. The write-lock operation may update the value of global_writer_id to the ID of the writing thread; the write-unlock operation may update the value of global_writer_id to 0.
Similarly, for the write lock, in order to prevent a read-lock operation by another reading thread from occurring between the check of each core's private reference count and the corresponding write operation, that is, to prevent the conflict detection from failing in this case, after the value of global_writer_id is updated to the ID of the writing thread, it is checked again whether each core's private reference count is 0 at that moment. Only if every count is 0 is the write operation performed.
The write lock operation method may be based on the read lock operation method or the read lock operation system.
One embodiment of the write lock operating system of the present application is described below. Fig. 4 shows a block diagram of an embodiment of the system. As shown in FIG. 4, the embodiment of the write lock operating system of the present application includes:
a data unit 3 for storing data;
a first judgment unit 21a configured to judge whether there is a read operation process on data in all the computing cores before performing a write operation on the data;
a second judging unit 21b configured to judge whether data is in another write operation process before performing a write operation on the data;
and a write-lock/unlock unit 22, configured to, when the judgment results of the first judgment unit and the second judgment unit are both negative, perform write-lock and write-unlock operations with the global write lock in the process of performing the write operation on the data.
Specifically, before a write operation is performed, a write lock is applied to the data; after the write operation, the lock is unlocked for the data.
The global variable is, for example, global_writer_id. The write-lock operation may update the value of global_writer_id to the ID of the writing thread; the write-unlock operation may update the value of global_writer_id to 0.
The write lock operating system may be based on the read lock operating method or the read lock operating system.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing the present application.
From the above description of the embodiments, it is clear to those skilled in the art that the present application can be implemented by software plus necessary general hardware platform. Based on such understanding, the technical solutions of the present application may be essentially or partially implemented in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments or some parts of the embodiments of the present application.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The application is operational with numerous general purpose or special purpose computing system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet-type devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
While the present application has been described by way of examples, those of ordinary skill in the art will appreciate that numerous variations and modifications of the present application are possible without departing from the spirit of the application, and it is intended that the appended claims encompass such variations and modifications.

Claims (16)

1. A read lock operation method is applied to a multi-core computer system, and comprises the following steps:
setting different private reference counts respectively corresponding to different cores;
in the process of the threads of different cores reading the same data, the thread of each core performing read-lock and read-unlock operations aiming at the same data based on the corresponding private reference count.
2. The method for operating a read lock according to claim 1, wherein the setting of the different private reference counts respectively corresponding to the different cores comprises:
allocating the private reference count corresponding to each core before the thread of that core adds a read lock to the same data for the first time, and recovering the private reference count after the thread of that core finishes the read operation on the same data; or,
the private reference count for each core is fixedly assigned.
3. The method for operating a read lock according to claim 1 or 2, wherein the setting of the different private reference counts respectively corresponding to the different cores comprises:
an array of private reference counts is set, and different entries in the array are assigned to different cores.
4. The method of claim 3, wherein assigning different entries in the array to different cores comprises:
different entries in the array are allocated to different cache lines in the cache, the different cache lines corresponding to different cores.
5. The method for operating a read lock according to claim 1, wherein, in the process of threads of different cores reading the same data, each core performing read-lock and read-unlock operations on the same data based on the corresponding private reference count comprises:
in the process of the thread of the first core reading the same data, performing read-lock and read-unlock operations by using the private reference count corresponding to the first core;
and in the process of the thread of the second core reading the same data, performing read-lock and read-unlock operations by using the private reference count corresponding to the second core.
6. The method of claim 1, wherein the thread of each core performing a read-lock operation on the same data based on the corresponding private reference count comprises:
the thread of each core executing an add-1 operation on its corresponding private reference count;
and the thread of each core performing a read-unlock operation on the same data based on the corresponding private reference count comprises:
the thread of each core executing a subtract-1 operation on its corresponding private reference count.
7. The method for operating a read lock according to claim 1, wherein, before the read-lock operation is performed on the private reference counts corresponding to the different cores, the method further comprises:
the threads of the different cores checking whether the same data is in the process of a write operation, with the check result being no.
8. The method of claim 6, wherein after the thread of each core performs the add-1 operation with respect to its corresponding private reference count and before performing the read operation, the method further comprises:
the thread of each core checks whether the same data is in the process of writing operation;
the thread of each core performs read operations, including:
and if the thread of each core determines that the check result represents that the same data is not in the process of writing operation, executing reading operation.
9. A read lock operating system comprises a data unit, a first cache unit, a second cache unit, a first core and a second core, wherein the first core and the second core are different cores in a computer system,
a data unit for storing data;
a first cache unit to store a first private reference count allocated for a first core;
a second cache unit to maintain a second private reference count assigned for the second core;
a first core and a second core for reading the same data in the data unit; and the number of the first and second electrodes,
in the process of the thread of the first core reading the same data, performing read-lock and read-unlock operations aiming at the same data based on the private reference count corresponding to the first core;
and in the process of the thread of the second core reading the same data, performing read-lock and read-unlock operations aiming at the same data based on the private reference count corresponding to the second core.
10. The read lock operating system of claim 9, the first cache unit being a cache of a first core;
the second cache unit is a cache of a second core.
11. The read lock operating system of claim 10, the first cache location corresponding to a first cache line;
the second cache unit corresponds to a second cache line.
12. The system according to claim 9, further comprising a checking unit configured to check whether the same data is in a write operation process and, if not, trigger each core to perform read-lock and read-unlock operations on the corresponding private reference count.
13. A write lock operation method is applied to a multi-core computer system and sets different private reference counts corresponding to different cores, and the write lock operation method comprises the following steps:
before the write operation is executed on the specified data, whether a read operation process on the specified data exists in all cores of the computer system is judged by traversing the private reference counts corresponding to the cores;
before the specified data is subjected to write operation, judging whether the specified data is in the process of another write operation based on a global write lock aiming at the specified data;
and if the two judgment results are negative, performing write lock adding and write lock releasing operations by using the global write lock in the process of performing write operation on the specified data.
14. The write lock operation method according to claim 13, wherein the write lock operation with the global write lock specifically includes: updating the value of the global write lock variable to be the ID of the write thread;
the performing write unlock operation with global write lock specifically includes: the value of the global write lock variable is updated to 0.
15. The write lock operation method of claim 14, after updating the value of the global write lock variable to the ID of the write thread and before performing the write operation, further comprising:
checking the private reference count value corresponding to each core at that moment;
performing a write operation comprising:
if the check result is 0, the write operation is performed.
16. A write lock operation system comprising a data unit, a first judgment unit, a second judgment unit, a write-lock/unlock unit, and different private reference counts corresponding to different cores, wherein:
a data unit for storing data;
the first judgment unit is used for judging whether a read operation process for the data exists in all cores of the computer system or not by traversing the private reference counts corresponding to the cores before the write operation is executed on the specified data stored in the data unit;
a second judging unit, configured to, before performing a write operation on the specified data, judge whether the data is in the process of another write operation based on a global write lock for the specified data;
and the write-lock/unlock unit is configured to perform write-lock and write-unlock operations with the global write lock in the process of performing the write operation on the specified data, under the condition that the judgment results of the first judgment unit and the second judgment unit are both negative.
CN202111082328.9A 2013-10-15 2013-10-15 Read lock operation method, write lock operation method and system Pending CN113835901A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111082328.9A CN113835901A (en) 2013-10-15 2013-10-15 Read lock operation method, write lock operation method and system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201310482117.3A CN104572568B (en) 2013-10-15 2013-10-15 Read lock operation method, write lock operation method and system
CN202111082328.9A CN113835901A (en) 2013-10-15 2013-10-15 Read lock operation method, write lock operation method and system

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201310482117.3A Division CN104572568B (en) 2013-10-15 2013-10-15 Read lock operation method, write lock operation method and system

Publications (1)

Publication Number Publication Date
CN113835901A true CN113835901A (en) 2021-12-24

Family

ID=53088677

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201310482117.3A Active CN104572568B (en) 2013-10-15 2013-10-15 Read lock operation method, write lock operation method and system
CN202111082328.9A Pending CN113835901A (en) 2013-10-15 2013-10-15 Read lock operation method, write lock operation method and system

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201310482117.3A Active CN104572568B (en) 2013-10-15 2013-10-15 Read lock operation method, write lock operation method and system

Country Status (1)

Country Link
CN (2) CN104572568B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115202884A (en) * 2022-07-26 2022-10-18 江苏安超云软件有限公司 Method for reading, reading and writing lock of high-performance system based on polling and application

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105094840B (en) * 2015-08-14 2019-01-29 浪潮(北京)电子信息产业有限公司 A kind of atomic operation implementation method and device based on buffer consistency principle
CN105955804B (en) * 2016-04-22 2018-06-05 星环信息科技(上海)有限公司 A kind of method and apparatus for handling distributed transaction
US20180232304A1 (en) * 2017-02-16 2018-08-16 Futurewei Technologies, Inc. System and method to reduce overhead of reference counting
US20180260255A1 (en) * 2017-03-10 2018-09-13 Futurewei Technologies, Inc. Lock-free reference counting
CN108388424B (en) * 2018-03-09 2021-09-21 北京奇艺世纪科技有限公司 Method and device for calling interface data and electronic equipment
CN110704198B (en) * 2018-07-10 2023-05-02 阿里巴巴集团控股有限公司 Data operation method, device, storage medium and processor
CN109271258B (en) 2018-08-28 2020-11-17 百度在线网络技术(北京)有限公司 Method, device, terminal and storage medium for realizing re-entry of read-write lock
CN109656730B (en) * 2018-12-20 2021-02-23 东软集团股份有限公司 Cache access method and device
CN111459691A (en) * 2020-04-13 2020-07-28 中国人民银行清算总中心 Read-write method and device for shared memory
CN111597193B (en) * 2020-04-28 2023-09-26 广东亿迅科技有限公司 Tree data locking and unlocking method
CN111782609B (en) * 2020-05-22 2023-10-13 北京和瑞精湛医学检验实验室有限公司 Method for rapidly and uniformly slicing fastq file
CN111913810B (en) * 2020-07-28 2024-03-19 阿波罗智能技术(北京)有限公司 Task execution method, device, equipment and storage medium in multithreading scene
CN112346879B (en) * 2020-11-06 2023-08-11 网易(杭州)网络有限公司 Process management method, device, computer equipment and storage medium
CN113791916B (en) * 2021-11-17 2022-02-08 支付宝(杭州)信息技术有限公司 Object updating and reading method and device
CN115599575B (en) * 2022-09-09 2024-04-16 ***数智科技有限公司 Novel method for solving concurrent activation and deactivation of cluster logical volumes

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040054861A1 (en) * 2002-09-17 2004-03-18 Harres John M. Method and tool for determining ownership of a multiple owner lock in multithreading environments
CN101039278A (en) * 2007-03-30 2007-09-19 华为技术有限公司 Data management method and system
CN101771600A (en) * 2008-12-30 2010-07-07 北京天融信网络安全技术有限公司 Method for concurrently processing join in multi-core systems
US20100333096A1 (en) * 2009-06-26 2010-12-30 David Dice Transactional Locking with Read-Write Locks in Transactional Memory Systems
CN102999378A (en) * 2012-12-03 2013-03-27 中国科学院软件研究所 Read-write lock implement method
CN103279428A (en) * 2013-05-08 2013-09-04 中国人民解放军国防科学技术大学 Explicit multi-core Cache consistency active management method facing flow application

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3614646B2 (en) * 1998-03-12 2005-01-26 富士通株式会社 Microprocessor, operation processing execution method, and storage medium
CN101854302B (en) * 2010-05-27 2016-08-24 中兴通讯股份有限公司 Message order-preserving method and system
CN102681892B (en) * 2012-05-15 2014-08-20 西安热工研究院有限公司 Key-Value type write-once read-many lock pool software module and running method thereof


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115202884A (en) * 2022-07-26 2022-10-18 江苏安超云软件有限公司 Method for reading, reading and writing lock of high-performance system based on polling and application
CN115202884B (en) * 2022-07-26 2023-08-22 江苏安超云软件有限公司 Method for adding read write lock of high-performance system based on polling and application

Also Published As

Publication number Publication date
CN104572568B (en) 2021-07-23
CN104572568A (en) 2015-04-29

Similar Documents

Publication Publication Date Title
CN104572568B (en) Read lock operation method, write lock operation method and system
US8954986B2 (en) Systems and methods for data-parallel processing
US8881153B2 (en) Speculative thread execution with hardware transactional memory
US9563477B2 (en) Performing concurrent rehashing of a hash table for multithreaded applications
CN108139946B (en) Method for efficient task scheduling in the presence of conflicts
US8645963B2 (en) Clustering threads based on contention patterns
US10579413B2 (en) Efficient task scheduling using a locking mechanism
US11170816B2 (en) Reader bias based locking technique enabling high read concurrency for read-mostly workloads
US20160004478A1 (en) Wait-free algorithm for inter-core, inter-process, or inter-task communication
US20180260255A1 (en) Lock-free reference counting
CN112306699A (en) Method and device for accessing critical resource, computer equipment and readable storage medium
CN115686881A (en) Data processing method and device and computer equipment
US20120143838A1 (en) Hierarchical software locking
US10101999B2 (en) Memory address collision detection of ordered parallel threads with bloom filters
US9250977B2 (en) Tiered locking of resources
US10310916B2 (en) Scalable spinlocks for non-uniform memory access
CN112346879B (en) Process management method, device, computer equipment and storage medium
US11074200B2 (en) Use-after-free exploit prevention architecture
WO2015004570A1 (en) Method and system for implementing a dynamic array data structure in a cache line
KR101667426B1 (en) Lock-free memory controller and multiprocessor system using the lock-free memory controller
Shin et al. Strata: Wait-free synchronization with efficient memory reclamation by using chronological memory allocation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40058806

Country of ref document: HK