CN104572568A - Read lock operation method, write lock operation method and system - Google Patents

Read lock operation method, write lock operation method and system

Info

Publication number
CN104572568A
CN104572568A (application CN201310482117.3A)
Authority
CN
China
Prior art keywords
core
read
data
lock
write
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201310482117.3A
Other languages
Chinese (zh)
Other versions
CN104572568B (en)
Inventor
席华锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Oceanbase Technology Co Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN201310482117.3A priority Critical patent/CN104572568B/en
Priority to CN202111082328.9A priority patent/CN113835901A/en
Publication of CN104572568A publication Critical patent/CN104572568A/en
Application granted granted Critical
Publication of CN104572568B publication Critical patent/CN104572568B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Techniques For Improving Reliability Of Storages (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

Embodiments of the invention disclose a read lock operation method, a write lock operation method, and a system. The read lock operation method comprises: setting a private reference count corresponding to each core; and, while threads of different cores read the same data, performing read-lock locking and unlocking operations on the private reference counts corresponding to the respective cores. With the embodiments of the invention, threads of different cores can read the same data while each operates independently on its own core's private reference count. Because the private reference counts of different cores do not need to be synchronized across cores, execution efficiency is improved. The scalability of the read lock is also improved: no matter how many cores' threads concurrently acquire and release the read lock, the time to acquire and release it barely increases.

Description

Read lock operation method, write lock operation method and system
Technical field
The present application relates to the technical field of computer systems organization, and in particular to a read lock operation method, a write lock operation method, and a system.
Background art
A thread, also referred to as a lightweight process (LWP), is a single sequential flow of control within a process and the smallest unit of program execution. In operating systems that support threads, the process is usually the basic unit of resource allocation, while the thread is the basic unit of independent execution and scheduling. Threads can execute concurrently: multiple threads within one process can run concurrently, as can threads in different processes. In particular, in a computer system with multiple compute cores, such as a multi-core CPU system, threads on different cores can also execute concurrently.
When multiple threads execute concurrently, they often need to access the same data; such data is shared among the threads. When multiple threads access shared data, its integrity must be guaranteed. For example, the shared data must not be modified by two threads at the same time, and a thread must not read data that has only been half modified. The classic mechanism for this is the lock. A thread adds a "read lock" to data while reading it, and a "write lock" while writing it. Before a thread reads the data it first acquires the read lock, and after the read completes it releases the read lock. Similarly, before a thread writes the data it first acquires the write lock, and after the write completes it releases the write lock. In the following, read_ref denotes the reference count of reading threads, and writer_ID denotes the ID of the writing thread.
When different threads all perform read operations on the same data, the read lock can be acquired multiple times. For example, before thread 1 reads the data it acquires the read lock by adding 1 to the value of read_ref (read_ref being, say, of integer type with initial value 0), and then reads the data. While it is reading, thread 2 also wants to read the same data, so it likewise adds 1 to read_ref and reads the data; read_ref is now 2. When thread 1 finishes its read, it subtracts 1 from read_ref and releases the read lock, leaving read_ref at 1. When thread 2 later finishes its read, it subtracts 1 from read_ref and releases the read lock, leaving read_ref at 0. Because the read lock on the same data can be acquired repeatedly, read locks are shared.
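The counting just described can be sketched in C11 atomics as follows. This is a minimal illustration, not the patent's actual code; the name read_ref follows the text, and the function names are assumptions:

```c
#include <stdatomic.h>
#include <assert.h>

/* Illustrative sketch of the shared read-lock counting: read_ref counts
   concurrent readers; acquire is an atomic add-1, release an atomic
   subtract-1. */
static atomic_int read_ref = 0;

void read_lock(void)   { atomic_fetch_add(&read_ref, 1); }
void read_unlock(void) { atomic_fetch_sub(&read_ref, 1); }
```

In the example above, two readers overlapping simply leave read_ref at 2 until each releases its lock, mirroring the thread 1 / thread 2 walkthrough.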
When different threads all perform write operations on the same data, the write lock can be acquired only once at a time. For example, before thread 1 writes the data it acquires the write lock by setting writer_ID to the ID of thread 1 (writer_ID being, say, of integer type with initial value 0, and no thread having ID 0), and then writes the data. While it is writing, thread 2 also wants to write the same data, but because writer_ID is not 0 at that moment, thread 2 cannot acquire the write lock and cannot write the data. When thread 1 finishes its write, it releases the write lock by setting writer_ID back to 0. After its earlier failure to acquire the write lock and after waiting for a while, thread 2 finds that writer_ID is now 0 and can acquire the write lock: it sets writer_ID to its own ID and writes the data. When thread 2 finishes, it releases the write lock by setting writer_ID back to 0. Thus the write lock on the same data cannot be acquired repeatedly: write locks are exclusive.
In addition, the write lock and the read lock are mutually exclusive: at any moment, data that is read-locked cannot also be write-locked, and data that is write-locked cannot be read-locked. Accordingly, before a thread reads the data it checks whether the data's writer_ID value is 0. If it is 0, the read can proceed; if not, the thread must wait for writer_ID to become 0. Similarly, before a thread writes the data it checks whether read_ref is 0; if it is 0 the write can proceed, otherwise the thread waits for read_ref to become 0. In practice there is a subtlety. For the read lock, another thread may acquire the write lock in the window between the check of writer_ID and the corresponding read operation, so the earlier conflict check becomes stale. Therefore, after adding 1 to read_ref, the thread checks writer_ID again; only if it is still 0 does the read proceed. Similarly for the write lock: another thread may acquire the read lock between the check of read_ref and the corresponding write operation, so after setting writer_ID to the writing thread's ID, the thread checks read_ref again; only if it is still 0 does the write proceed.
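The check-then-recheck protocol above can be sketched as follows. This is a hedged illustration, not the patent's literal code: the function names are assumptions, and the compare-and-swap used to claim writer_id is one plausible way to make the claim atomic:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <assert.h>

static atomic_int read_ref  = 0;   /* reader reference count        */
static atomic_int writer_id = 0;   /* 0 = no writer holds the lock  */

/* Acquire the read lock only if no writer is active; re-check
   writer_id after incrementing read_ref to close the race described
   in the text. */
bool try_read_lock(void) {
    if (atomic_load(&writer_id) != 0) return false;
    atomic_fetch_add(&read_ref, 1);
    if (atomic_load(&writer_id) != 0) {      /* recheck: writer slipped in */
        atomic_fetch_sub(&read_ref, 1);
        return false;
    }
    return true;
}

void read_unlock(void) { atomic_fetch_sub(&read_ref, 1); }

/* Claim the write lock atomically, then re-check for active readers. */
bool try_write_lock(int my_id) {
    int expected = 0;
    if (!atomic_compare_exchange_strong(&writer_id, &expected, my_id))
        return false;                        /* another writer is active */
    if (atomic_load(&read_ref) != 0) {       /* recheck readers */
        atomic_store(&writer_id, 0);
        return false;
    }
    return true;
}

void write_unlock(void) { atomic_store(&writer_id, 0); }
```

A failed attempt leaves both counters as they were, so the caller can wait and retry, as in the thread 2 walkthroughs above.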
The above modifications of the read_ref and writer_ID values are atomic operations. An atomic operation is generally an instruction provided by the CPU and is indivisible: while one thread executes an atomic operation it cannot be interrupted by another thread, and execution cannot be switched to another thread. In other words, once the atomic operation starts, it runs to completion.
In the course of making the present application, the inventor found at least the following problems in the prior art:
In a computer system with multiple compute cores, threads on different cores may read and write the same data. In particular, it is common for a period of time to contain a large number of read operations on the same data and no write operation. Each core generally has a corresponding cache, and each core maintains a read_ref value in its cache. Under the conventional implementation, the read_ref values in the caches of all cores must be kept consistent: whenever the read_ref value in one core's cache changes, that core must communicate with the other cores to notify them of the change, and the other cores then update the read_ref values in their own caches.
As a result, when threads on different cores read the same data under this prior-art scheme, the communication between cores takes time and the atomic operation that updates the read_ref value in each core's cache takes time, so execution efficiency is low.
Summary of the invention
The purpose of the embodiments of the present application is to provide a read lock operation method, a write lock operation method, and a system, so as to improve execution efficiency.
To solve the above technical problem, the embodiments of the present application provide a read lock operation method, a write lock operation method, and a system, implemented as follows:
A read lock operation method, comprising:
setting a private reference count corresponding to each core; and
while threads of different cores read the same data, performing read-lock locking and unlocking operations on the private reference counts corresponding to the respective cores.
A read lock operation system, comprising a data unit, a first cache unit, a second cache unit, a first compute core, and a second compute core, wherein:
the data unit is configured to store data;
the first cache unit is configured to hold a first private reference count allocated for the first compute core;
the second cache unit is configured to hold a second private reference count allocated for the second compute core;
the first compute core and the second compute core are configured to read the same data in the data unit; and
while a thread of the first compute core reads the data, read-lock locking and unlocking operations are performed on the private reference count corresponding to the first core;
while a thread of the second compute core reads the data, read-lock locking and unlocking operations are performed on the private reference count corresponding to the second core.
A write lock operation method, comprising:
before performing a write operation on data, judging whether any compute core has a read operation in progress on the data;
before performing the write operation, judging whether the data is in another write operation process; and
if both judgment results are no, performing write-lock locking and unlocking operations with a global write lock while the write operation on the data is carried out.
A write lock operation system, comprising a data unit, a first judging unit, a second judging unit, and a lock/unlock unit, wherein:
the data unit is configured to store data;
the first judging unit is configured to judge, before a write operation is performed on data, whether any compute core has a read operation in progress on the data;
the second judging unit is configured to judge, before the write operation is performed, whether the data is in another write operation process; and
the lock/unlock unit is configured to, when the judgment results of the first judging unit and the second judging unit are both no, perform write-lock locking and unlocking operations with the global write lock while the write operation on the data is carried out.
As can be seen from the technical solutions provided above, in the embodiments of the present application, when threads of different cores read the same data, each operates independently on the private reference count corresponding to its own core. The private reference counts of the different cores do not need to be synchronized across cores, so execution efficiency is improved. The scalability of the read lock is also improved: no matter how many cores' threads concurrently acquire and release the read lock, the time to acquire and release it barely increases, which improves execution efficiency.
Brief description of the drawings
In order to explain the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments recorded in the present application; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of an embodiment of the read lock operation method of the present application;
Fig. 2 is a block diagram of an embodiment of the read lock operation system of the present application;
Fig. 3 is a flowchart of an embodiment of the write lock operation method of the present application;
Fig. 4 is a block diagram of an embodiment of the write lock operation system of the present application.
Detailed description of the embodiments
The embodiments of the present application provide a read lock operation method, a write lock operation method, and a system.
In order to enable those skilled in the art to better understand the technical solutions in the present application, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. Based on the embodiments in the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.
First, an embodiment of the read lock operation method of the present application is introduced.
Fig. 1 shows the flowchart of this embodiment of the read lock operation method. As shown in Fig. 1, the method of this embodiment comprises:
S100: setting a private reference count corresponding to each core.
Modern CPUs employ a great deal of technology to offset the latency of memory accesses: in the time of one memory read or write, a CPU can execute hundreds or thousands of instructions. Multi-level static random access memory (SRAM) caches (hereinafter "caches") are the main means of reducing the impact of this latency.
For example, in a dual-core computer system, core 1 and core 2 have corresponding caches cache 1 and cache 2. The cache may be the cache of a compute core. For example, a CPU usually contains a level-1 (L1) cache and a level-2 (L2) cache, and some CPUs also contain a level-3 cache. In a CPU with L1 and L2 caches, data the CPU needs to operate on is, in one scenario, first read from main memory into the L2 cache, then from the L2 cache into the L1 cache, and then from the L1 cache into the CPU for execution. In general, the closer a memory is to the CPU, the faster but more expensive it is; the farther from the CPU, the slower but cheaper. The data the CPU reads and writes most frequently is therefore generally kept in the memory closest to the CPU, to make the best use of the most expensive memory.
In this step, the private reference count (private_read_ref) can preferably be placed in a cache, for example in the CPU's L1 cache. Of course, depending on the CPU architecture and the capacities of the memories at each level, the private reference count may also be placed in the L2 cache, or in any other memory whose read speed is of the same order of magnitude as the CPU's atomic-operation speed; the embodiments of the present application place no particular restriction on this. In practice, the cache is usually transparent to the program; that is, the program cannot control whether a variable will be placed in the cache, or at which cache level. When the program operates on a variable, the CPU checks whether the variable is in the L1 cache and, if so, reads it directly from there. If not, the CPU checks the L2 cache: if the variable is there, it is loaded from the L2 cache into the L1 cache; if it is not in the L2 cache either, it is loaded from main memory into both the L2 and L1 caches.
In the prior art, read operations by different threads on the same data involve the same reference count; that is, they operate on one shared counter. Following the common convention in the computer field, this reference count read_ref is called the global read_ref. In particular, when different threads of the same compute core — whether threads in the same process or in different processes — read the same data, they increment (++) or decrement (--) the same global read_ref. If a multi-core computer system still uses only one global reference count for multiple cores, the problem analyzed in the background section arises.
In this step, a private reference count is set for each core. For example, core 1 is given a corresponding private reference count, say read_ref_core1, and core 2 is given its own, say read_ref_core2; for systems with further cores, the rest follows by analogy.
The private reference count corresponding to each core need not be allocated permanently (i.e., fixed); it can be allocated temporarily. For example, it can be allocated before a thread of a core first read-locks the data, and reclaimed after the read operations of that core's threads on the data are finished. Concretely, an array [read_ref] of private reference counts can be set up; before a thread of a core first read-locks the data, it requests the allocation of one element of the [read_ref] array. The array can be made sufficiently large, each element can be of integer (int) type, and each element can be initialized to 0. Of course, for read operations on a given piece of data, the elements of the [read_ref] array may also be fixedly allocated, one to each core.
Preferably, in practice, each element of the [read_ref] array can be assigned its own cache line. The cache line is the smallest unit at which a multi-core CPU maintains cache coherence, and also the effective unit of memory exchange. In reality, a cache line on most platforms is larger than 8 bytes, and most cache lines are 64 bytes. If the [read_ref] array is defined with int elements of 8 bytes, one cache line can hold 8 read_ref elements. If more than one read_ref is stored in one cache line, operating on different elements of the array causes conflicts. To avoid conflicts, each read_ref in the [read_ref] array can be stored in its own cache line: for example, each element can be declared as a structure whose size is declared to be 64 bytes. In this way every element of the [read_ref] array monopolizes a cache line, and conflicts during operation are avoided.
S110: while threads of different cores read the same data, performing read-lock locking and unlocking operations on the private reference counts corresponding to the respective cores.
For example, suppose a computer system contains two compute cores, core 1 and core 2, and both are to read the same data. According to S100, core 1 can request one private reference count, labeled read_ref_core1; similarly, core 2 can request one, say read_ref_core2.
While a thread of core 1 reads the data, it first read-locks the data: it performs an add-1 operation on core 1's private reference count read_ref_core1, so read_ref_core1 goes from its initial value 0 to 1. The thread of core 1 then reads the data. After the read is finished, the unlocking operation is performed: a subtract-1 operation on read_ref_core1, which goes from 1 back to 0.
Similarly, while a thread of core 2 reads the data, it first read-locks the data: it performs an add-1 operation on core 2's private reference count read_ref_core2, so read_ref_core2 goes from its initial value 0 to 1. The thread of core 2 then reads the data. After the read is finished, the unlocking operation is performed: a subtract-1 operation on read_ref_core2, which goes from 1 back to 0.
With the above approach of the embodiments of the present application, when threads of different cores read the same data, each operates independently on its own core's private reference count. The private reference counts of the different cores do not need to be synchronized across cores, so execution efficiency improves. The scalability of the read lock also improves: no matter how many cores' threads concurrently acquire and release the read lock, the acquire/release time barely increases.
In addition, because the private reference counts of different cores need not be synchronized across cores, the communication between cores is eliminated, and with it the bandwidth, time, and other overheads of inter-core communication.
The above S110 can specifically comprise the following steps:
S111: while a thread of the first core reads the data, read-lock locking and unlocking operations are performed on the private reference count corresponding to the first core.
S112: while a thread of the second core reads the data, read-lock locking and unlocking operations are performed on the private reference count corresponding to the second core.
The read-lock locking operation on a private reference count specifically comprises: a process of each core performing an add-1 operation on the private reference count corresponding to that core. The unlocking operation on a private reference count specifically comprises: a process of each core performing a subtract-1 operation on the private reference count corresponding to that core. Between the locking and unlocking operations, the process of each core can read the data.
It should be noted that multiple different threads of the same core that read the same data can perform the read-lock locking and unlocking operations on the same private counter. For example, while thread 1 of core 1 reads the data, it first read-locks it: core 1's private reference count read_ref_core1 is incremented from its initial value 0 to 1, and thread 1 of core 1 then reads the data. While thread 1 of core 1 is reading, thread 2 of core 1 also wants to read the same data, so it adds 1 to read_ref_core1 and reads the data; read_ref_core1 is now 2. When the read of thread 1 of core 1 finishes, read_ref_core1 is decremented and the read lock is released, leaving its value at 1. When the read of thread 2 of core 1 later finishes, read_ref_core1 is decremented and the read lock is released, leaving its value at 0. Thus, for one core, no matter how many of its threads concurrently acquire and release the read lock, the acquire/release time barely increases.
It should also be noted that, to avoid data inconsistency, the read lock in the embodiments of the present application is still mutually exclusive with the write lock. For example, in a computer system with multiple cores, a global write lock can be set, say global_writer_id. Before a thread performs a write operation on the data, it write-locks the data. For example, thread 1 of some core sets the value of global_writer_id to the ID of thread 1 (global_writer_id being, say, of integer type with initial value 0, and no thread having ID 0), and then writes the data. While it is writing, a thread of some core (the same core as the thread holding the write lock, or a different one), here called thread 2, wants to read the same data and requests the private reference count corresponding to its core, initialized, say, to 0. But because global_writer_id is not 0 at that moment, thread 2 cannot acquire the read lock and cannot read the data. When the write of thread 1 finishes, it releases the write lock by setting global_writer_id back to 0. After its earlier read-lock failure and after waiting for a while, thread 2 finds that global_writer_id is now 0 and can acquire the read lock. Alternatively, thread 2 may retry the read lock at regular intervals after the earlier failure, and succeed once the value of global_writer_id is 0. Thread 2 then adds 1 to its requested private reference count, whose value becomes 1, and reads the data. When the read of thread 2 finishes, it releases the read lock by performing a subtract-1 operation on its corresponding private reference count, which becomes 0.
Based on this, before the threads of the different cores perform the read-lock locking operation on their corresponding private reference counts in S110, the method can further comprise:
S101: the threads of the different cores check whether the data is in a write operation process, and trigger the execution of S110 when the check result is no.
Whether the data is in a write operation process can be determined by checking the state of the global write lock. For example, it can be checked whether the global write lock is 0, and S110 is executed when the check result is 0.
Otherwise, if the value of the global write lock is found not to be 0, a write operation on the data currently exists. Given the aforementioned mutual exclusion between the write lock and the read lock, the data can then neither be read-locked nor read. In this situation, it is necessary to wait for the global write lock to become 0 before executing S110.
S101 may be performed after S100, or it may be performed before S100.
It should be noted that, for the read lock, another thread may acquire the write lock in the window between the check of the global_writer_id value and the corresponding read operation, so that the earlier conflict check becomes stale. Therefore, after adding 1 to the private reference count corresponding to the core, the thread checks the global_writer_id value again; only if it is still 0 does the read operation proceed.
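Combining S101 and S110, a per-core read-lock attempt with the global_writer_id re-check can be sketched as follows. This is an illustration under assumed names, not the patent's implementation:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <assert.h>

#define NUM_CORES 4   /* illustrative core count */

static atomic_int global_writer_id;             /* 0 = no writer active */
static atomic_int private_read_ref[NUM_CORES];  /* one counter per core */

/* S101: check the global write lock; S110: bump this core's private
   counter; then re-check global_writer_id to close the race described
   in the text. */
bool core_try_read_lock(int core) {
    if (atomic_load(&global_writer_id) != 0)
        return false;                              /* writer already active */
    atomic_fetch_add(&private_read_ref[core], 1);  /* read-lock this core */
    if (atomic_load(&global_writer_id) != 0) {     /* recheck: writer slipped in */
        atomic_fetch_sub(&private_read_ref[core], 1);
        return false;
    }
    return true;
}

void core_read_unlock(int core) {
    atomic_fetch_sub(&private_read_ref[core], 1);
}
```

A failed attempt undoes the increment, so a waiting writer that scans the per-core counters is not blocked by a reader that never actually proceeded.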
An embodiment of the read lock operation system of the present application is introduced below. Fig. 2 shows the block diagram of this system embodiment.
As shown in Fig. 2, the read lock operation system in one embodiment of the present application comprises a first compute core 11a, a second compute core 11b, a first cache unit 12a, a second cache unit 12b, and a data unit 13, each compute core corresponding to one unique cache unit.
Wherein:
the data unit 13 is configured to store data;
the first cache unit 12a is configured to hold a first private reference count allocated for the first compute core;
the second cache unit 12b is configured to hold a second private reference count allocated for the second compute core;
the first compute core 11a and the second compute core 11b are configured to read the same data in the data unit; and
while a thread of the first compute core 11a reads the data, read-lock locking and unlocking operations are performed on the private reference count corresponding to the first core;
while a thread of the second compute core 11b reads the data, read-lock locking and unlocking operations are performed on the private reference count corresponding to the second core.
Wherein:
The first cache unit 12a may be the cache of the first compute core;
the second cache unit 12b may be the cache of the second compute core.
As mentioned in the method embodiment above, setting the private reference count corresponding to each core can mean allocating a core's private reference count before a thread of that core first read-locks the data, or fixedly allocating one private reference count per core. For example, an array [read_ref] of private reference counts can be set up: before a thread of a core first read-locks the data, it requests the allocation of one element of the array. The array can be made sufficiently large, with each element of integer (int) type initialized to 0; or, for read operations on a given piece of data, the elements of the [read_ref] array may be fixedly allocated, one to each core. Preferably, in practice, each element of the [read_ref] array is assigned its own cache line. The cache line is the smallest unit at which a multi-core CPU maintains cache coherence and the effective unit of memory exchange; on most platforms it is larger than 8 bytes, most commonly 64 bytes. If the [read_ref] array is defined with int elements of 8 bytes, one cache line can hold 8 read_ref elements, and storing more than one read_ref in a cache line causes conflicts when different elements of the array are operated on. To avoid conflicts, each read_ref can be stored in its own cache line: for example, each element can be declared as a structure whose size is declared to be 64 bytes, so that every element of the [read_ref] array monopolizes a cache line and conflicts during operation are avoided.
In conjunction with the foregoing, in an embodiment of the read lock operating system of the application, the caches of different cores may correspond to different cache lines. For example, the first cache unit corresponds to a first cache line, and the second cache unit corresponds to a second cache line.
The read lock operating system embodiment may further comprise an inspection unit 14, for checking whether the data is in a write-operation process and, if not, triggering each computing core to perform the read-lock and read-unlock operations on its corresponding private reference count.
The above read-lock operation on a private reference count specifically comprises: the process on each core performs an add-1 operation on the private reference count corresponding to that core. The read-unlock operation on a private reference count specifically comprises: the process on each core performs a subtract-1 operation on the private reference count corresponding to that core. Between the read-lock and read-unlock operations, the process on each core can read the data.
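The add-1/subtract-1 path above, combined with the inspection unit's check for an active writer, can be sketched as follows. This is a minimal sketch under stated assumptions: the padded `read_ref` array, the `global_writer_id` variable (0 meaning "no writer", as described later in the write-lock embodiment), and the function names are illustrative, not taken verbatim from the patent.

```c
#include <stdatomic.h>

#define NCORES 8
#define CACHE_LINE 64

/* One padded private reference count per core (names are illustrative). */
struct core_ref {
    _Atomic int count;
    char pad[CACHE_LINE - sizeof(_Atomic int)];
};
static struct core_ref read_ref[NCORES];
static _Atomic int global_writer_id; /* 0 = no writer holds the write lock */

/* Read lock: +1 on this core's private count, then check for a writer.
 * Returns 1 on success, 0 if a writer is active (caller may retry). */
int read_lock(int core) {
    atomic_fetch_add(&read_ref[core].count, 1);
    if (atomic_load(&global_writer_id) != 0) {
        atomic_fetch_sub(&read_ref[core].count, 1); /* writer active: back off */
        return 0;
    }
    return 1; /* safe to read the data until read_unlock */
}

/* Read unlock: -1 on the same core's private count. */
void read_unlock(int core) {
    atomic_fetch_sub(&read_ref[core].count, 1);
}
```

Note that each core only ever increments and decrements its own element, so no synchronization between the per-core counters is needed, which is the scalability point made in the abstract.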
The following introduces an embodiment of the write lock operation method of the application. Fig. 3 shows the flowchart of this method embodiment. As shown in Fig. 3, the write lock operation method embodiment of the application comprises:
S300: before performing a write operation on data, judge whether any computing core has a read-operation process on the data.
Judging whether any computing core has a read process on the data can specifically be realized by traversing the private reference count corresponding to the data on each computing core and checking whether it is 0. If every count is 0, the data is not in a read-operation process; if any count is not 0, the data is in a read-operation process.
S310: before performing the write operation on the data, judge whether the data is in another write-operation process.
S310 can specifically be realized by judging whether the global write lock for the data is 0. If it is 0, the data is not in another write-operation process; if it is not 0, another write-operation process exists.
S320: if the judged results of S300 and S310 above are both no, then, in the process of performing the write operation on the data, perform write-lock and write-unlock operations with the global write lock.
Specifically, the data is write-locked before the write operation is carried out, and write-unlocked after writing is finished.
The global variable in S320 is, for example, global_writer_id. Write-locking can be updating the value of global_writer_id to the ID of the writing thread; write-unlocking can be updating the value of global_writer_id to 0.
Similarly, for write-locking: to prevent another thread from performing a read operation and its read-lock operation between the check of each core's private reference count and the subsequent write operation — that is, to prevent the conflict detection from being invalidated in that situation — after the value of global_writer_id has been updated to the writing thread's ID, the private reference count of each core must be checked again to confirm that it is now 0. Only if all counts are 0 is the write operation performed.
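Steps S300 through S320, including the re-check after publishing the writer's ID, can be sketched as follows. This is one illustrative reading of the scheme: the flat `read_ref` array, the CAS used to claim `global_writer_id`, and the function names are assumptions, not the patent's literal implementation.

```c
#include <stdatomic.h>
#include <stdbool.h>

#define NCORES 8
static _Atomic int read_ref[NCORES];  /* per-core private reference counts */
static _Atomic int global_writer_id;  /* 0 = unlocked */

/* Traverse every core's private count; true iff all are 0 (no readers). */
static bool no_readers(void) {
    for (int i = 0; i < NCORES; i++)
        if (atomic_load(&read_ref[i]) != 0)
            return false;
    return true;
}

/* Returns true if the write lock was acquired; my_thread_id must be nonzero. */
bool write_lock(int my_thread_id) {
    if (!no_readers())                 /* S300: a reader is active */
        return false;
    int expected = 0;                  /* S310: is another writer active? */
    if (!atomic_compare_exchange_strong(&global_writer_id, &expected,
                                        my_thread_id))
        return false;
    if (!no_readers()) {               /* re-check: a reader may have locked
                                          before our ID became visible */
        atomic_store(&global_writer_id, 0);
        return false;
    }
    return true;                       /* S320: safe to perform the write */
}

void write_unlock(void) {
    atomic_store(&global_writer_id, 0); /* write-unlock: reset to 0 */
}
```

The second `no_readers()` call is the double-check the paragraph above insists on: without it, a reader that incremented its count just before `global_writer_id` was published would go unnoticed.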
The above write lock operation method can be based on the aforementioned read lock operation method or read lock operating system.
The following introduces an embodiment of the write lock operating system of the application. Fig. 4 shows the block diagram of this system embodiment. As shown in Fig. 4, the write lock operating system embodiment of the application comprises:
A data unit 3, for storing data;
A first judging unit 21a, for judging, before a write operation is performed on the data, whether any computing core has a read-operation process on the data;
A second judging unit 21b, for judging, before the write operation is performed on the data, whether the data is in another write-operation process;
A write-lock/write-unlock unit 22, for, when the judged results of the first judging unit and the second judging unit are both no, performing write-lock and write-unlock operations with the global write lock in the process of performing the write operation on the data.
Specifically, the data is write-locked before the write operation is carried out, and write-unlocked after writing is finished.
The global variable is, for example, global_writer_id. Write-locking can be updating the value of global_writer_id to the ID of the writing thread; write-unlocking can be updating the value of global_writer_id to 0.
The above write lock operating system can be based on the aforementioned read lock operation method or read lock operating system.
The systems, devices, modules or units illustrated in the above embodiments can specifically be realized by a computer chip or entity, or by a product having certain functions.
For convenience of description, the above devices are described by dividing their functions into various units. Of course, when implementing the application, the functions of the units can be realized in one or more pieces of software and/or hardware.
From the above description of the embodiments, those skilled in the art can clearly understand that the application can be realized by software plus a necessary general-purpose hardware platform. Based on such understanding, the technical scheme of the application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product can be stored in a storage medium, such as ROM/RAM, magnetic disk or optical disc, and comprises instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute the method described in each embodiment of the application or in certain parts of an embodiment.
The embodiments in this specification are described in a progressive manner; for identical or similar parts, the embodiments may refer to one another, and each embodiment focuses on its differences from the others. In particular, the system embodiments are described relatively simply because they are substantially similar to the method embodiments; for relevant parts, refer to the description of the method embodiments.
The application can be used in numerous general-purpose or special-purpose computing system environments or configurations, for example: personal computers, server computers, handheld or portable devices, tablet devices, multiprocessor systems, microprocessor-based systems, set-top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, and distributed computing environments comprising any of the above systems or devices.
The application can be described in the general context of computer-executable instructions, such as program modules. Generally, program modules comprise routines, programs, objects, components, data structures, etc. that perform particular tasks or realize particular abstract data types. The application can also be practiced in distributed computing environments, where tasks are executed by remote processing devices connected through a communication network. In a distributed computing environment, program modules can be located in local and remote computer storage media, including storage devices.
Although the application has been depicted through embodiments, those of ordinary skill in the art will appreciate that the application has many variations and changes that do not depart from its spirit, and it is intended that the appended claims cover these variations and changes without departing from the spirit of the application.

Claims (16)

1. A read lock operation method, characterized by comprising:
setting a private reference count corresponding to each core;
while threads of different cores read the same data, performing read-lock and read-unlock operations on the private reference counts corresponding to the different cores.
2. The read lock operation method as claimed in claim 1, characterized in that setting a private reference count corresponding to each core comprises:
allocating the private reference count corresponding to a core when a thread of that core first read-locks the data; or,
fixedly allocating a private reference count to each core.
3. The read lock operation method as claimed in claim 1 or 2, characterized in that allocating the private reference count corresponding to each core comprises:
setting an array of private reference counts, and allocating each element of the array to one core.
4. The read lock operation method as claimed in claim 3, characterized in that allocating the private reference count corresponding to each core comprises:
assigning each element of the reference count array to one cache line of the cache.
5. The read lock operation method as claimed in claim 1, characterized in that performing read-lock and read-unlock operations on the private reference counts corresponding to the different cores while threads of different cores read the same data comprises:
while a thread of a first core reads the data, performing read-lock and read-unlock operations on the private reference count corresponding to the first core;
while a thread of a second core reads the data, performing read-lock and read-unlock operations on the private reference count corresponding to the second core.
6. The read lock operation method as claimed in claim 1, characterized in that:
performing the read-lock operation on the private reference counts corresponding to the different cores in S2 specifically comprises: a process on each of the different cores performing an add-1 operation on the private reference count corresponding to that core;
performing the read-unlock operation on the private reference counts corresponding to the different cores in S2 specifically comprises: a process on each of the different cores performing a subtract-1 operation on the private reference count corresponding to that core.
7. The read lock operation method as claimed in claim 1, characterized in that, before the read-lock is performed on the private reference counts corresponding to the different cores, the method further comprises:
the threads of the different cores checking whether the data is in a write-operation process, the check result being no.
8. The read lock operation method as claimed in claim 6, characterized in that, after the processes of the different cores perform the add-1 operation on the private reference counts corresponding to the cores and before they perform the read operation, the method further comprises:
the threads of the different cores checking whether the data is in a write-operation process, the check result being no.
9. A read lock operating system, characterized by comprising a data unit, a first cache unit, a second cache unit, a first computing core and a second computing core, wherein:
the data unit is for storing data;
the first cache unit is for storing a first private reference count allocated to the first computing core;
the second cache unit is for storing a second private reference count allocated to the second computing core;
the first computing core and the second computing core are for reading the same data in the data unit; and,
while a thread of the first computing core reads the data, it performs read-lock and read-unlock operations on the private reference count corresponding to the first core;
while a thread of the second computing core reads the data, it performs read-lock and read-unlock operations on the private reference count corresponding to the second core.
10. The read lock operating system as claimed in claim 9, characterized in that:
the first cache unit is the cache of the first computing core;
the second cache unit is the cache of the second computing core.
11. The read lock operating system as claimed in claim 10, characterized in that:
the first cache unit corresponds to a first cache line;
the second cache unit corresponds to a second cache line.
12. The read lock operating system as claimed in claim 9, characterized by further comprising an inspection unit, for checking whether the data is in a write-operation process and, if not, triggering each computing core to perform the read-lock and read-unlock operations on its corresponding private reference count.
13. A write lock operation method, characterized by comprising:
before performing a write operation on data, judging whether any computing core has a read-operation process on the data;
before performing the write operation on the data, judging whether the data is in another write-operation process;
if both of the above judged results are no, performing write-lock and write-unlock operations with a global write lock in the process of performing the write operation on the data.
14. The write lock operation method as claimed in claim 13, characterized in that:
performing the write-lock operation with the global write lock specifically comprises: updating the value of the global write lock variable to the ID of the writing thread;
performing the write-unlock operation with the global write lock specifically comprises: updating the value of the global write lock variable to 0.
15. The write lock operation method as claimed in claim 14, characterized in that, after the value of the global write lock variable is updated to the ID of the writing thread and before the write operation is performed, the method further comprises:
checking the private reference count value of each core at this time, the check result being 0.
16. A write lock operating system, characterized by comprising a data unit, a first judging unit, a second judging unit, and a write-lock/write-unlock unit, wherein:
the data unit is for storing data;
the first judging unit is for judging, before a write operation is performed on data, whether any computing core has a read-operation process on the data;
the second judging unit is for judging, before the write operation is performed on the data, whether the data is in another write-operation process;
the write-lock/write-unlock unit is for, when the judged results of the first judging unit and the second judging unit are both no, performing write-lock and write-unlock operations with the global write lock in the process of performing the write operation on the data.
CN201310482117.3A 2013-10-15 2013-10-15 Read lock operation method, write lock operation method and system Active CN104572568B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201310482117.3A CN104572568B (en) 2013-10-15 2013-10-15 Read lock operation method, write lock operation method and system
CN202111082328.9A CN113835901A (en) 2013-10-15 2013-10-15 Read lock operation method, write lock operation method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310482117.3A CN104572568B (en) 2013-10-15 2013-10-15 Read lock operation method, write lock operation method and system

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202111082328.9A Division CN113835901A (en) 2013-10-15 2013-10-15 Read lock operation method, write lock operation method and system

Publications (2)

Publication Number Publication Date
CN104572568A true CN104572568A (en) 2015-04-29
CN104572568B CN104572568B (en) 2021-07-23

Family

ID=53088677

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201310482117.3A Active CN104572568B (en) 2013-10-15 2013-10-15 Read lock operation method, write lock operation method and system
CN202111082328.9A Pending CN113835901A (en) 2013-10-15 2013-10-15 Read lock operation method, write lock operation method and system

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202111082328.9A Pending CN113835901A (en) 2013-10-15 2013-10-15 Read lock operation method, write lock operation method and system

Country Status (1)

Country Link
CN (2) CN104572568B (en)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105094840A (en) * 2015-08-14 2015-11-25 浪潮(北京)电子信息产业有限公司 Atomic operation implementation method and device based on cache consistency principle
WO2017181931A1 (en) * 2016-04-22 2017-10-26 星环信息科技(上海)有限公司 Method and device for processing distributed transaction
CN108388424A (en) * 2018-03-09 2018-08-10 北京奇艺世纪科技有限公司 A kind of method, apparatus and electronic equipment of calling interface data
WO2018161844A1 (en) * 2017-03-10 2018-09-13 Huawei Technologies Co., Ltd. Lock-free reference counting
CN109271258A (en) * 2018-08-28 2019-01-25 百度在线网络技术(北京)有限公司 Implementation method, device, terminal and the storage medium that Read-Write Locks are reentried
CN109656730A (en) * 2018-12-20 2019-04-19 东软集团股份有限公司 A kind of method and apparatus of access cache
CN110249303A (en) * 2017-02-16 2019-09-17 华为技术有限公司 System and method for reducing reference count expense
CN110704198A (en) * 2018-07-10 2020-01-17 阿里巴巴集团控股有限公司 Data operation method, device, storage medium and processor
CN111459691A (en) * 2020-04-13 2020-07-28 中国人民银行清算总中心 Read-write method and device for shared memory
CN111597193A (en) * 2020-04-28 2020-08-28 广东亿迅科技有限公司 Method for locking and unlocking tree-shaped data
CN111782609A (en) * 2020-05-22 2020-10-16 北京和瑞精准医学检验实验室有限公司 Method for rapidly and uniformly fragmenting fastq file
CN111913810A (en) * 2020-07-28 2020-11-10 北京百度网讯科技有限公司 Task execution method, device, equipment and storage medium under multi-thread scene
CN112346879A (en) * 2020-11-06 2021-02-09 网易(杭州)网络有限公司 Process management method and device, computer equipment and storage medium
CN113791916A (en) * 2021-11-17 2021-12-14 支付宝(杭州)信息技术有限公司 Object updating and reading method and device
CN115599575A (en) * 2022-09-09 2023-01-13 ***数智科技有限公司(Cn) Novel method for solving concurrent activation and deactivation of cluster logical volume

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115202884B (en) * 2022-07-26 2023-08-22 江苏安超云软件有限公司 Method for adding read write lock of high-performance system based on polling and application

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6292881B1 (en) * 1998-03-12 2001-09-18 Fujitsu Limited Microprocessor, operation process execution method and recording medium
CN101039278A (en) * 2007-03-30 2007-09-19 华为技术有限公司 Data management method and system
CN101854302A (en) * 2010-05-27 2010-10-06 中兴通讯股份有限公司 Message order-preserving method and system
CN102681892A (en) * 2012-05-15 2012-09-19 西安热工研究院有限公司 Key-Value type write-once read-many lock pool software module and running method thereof
CN102999378A (en) * 2012-12-03 2013-03-27 中国科学院软件研究所 Read-write lock implement method
CN103279428A (en) * 2013-05-08 2013-09-04 中国人民解放军国防科学技术大学 Explicit multi-core Cache consistency active management method facing flow application

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6886081B2 (en) * 2002-09-17 2005-04-26 Sun Microsystems, Inc. Method and tool for determining ownership of a multiple owner lock in multithreading environments
CN101771600B (en) * 2008-12-30 2012-12-12 北京天融信网络安全技术有限公司 Method for concurrently processing join in multi-core systems
US8973004B2 (en) * 2009-06-26 2015-03-03 Oracle America, Inc. Transactional locking with read-write locks in transactional memory systems


Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105094840B (en) * 2015-08-14 2019-01-29 浪潮(北京)电子信息产业有限公司 A kind of atomic operation implementation method and device based on buffer consistency principle
CN105094840A (en) * 2015-08-14 2015-11-25 浪潮(北京)电子信息产业有限公司 Atomic operation implementation method and device based on cache consistency principle
WO2017181931A1 (en) * 2016-04-22 2017-10-26 星环信息科技(上海)有限公司 Method and device for processing distributed transaction
US11023446B2 (en) 2016-04-22 2021-06-01 Transwarp Technology (Shanghai) Co., Ltd. Method and device for processing distributed transaction
CN110249303A (en) * 2017-02-16 2019-09-17 华为技术有限公司 System and method for reducing reference count expense
CN110352406A (en) * 2017-03-10 2019-10-18 华为技术有限公司 Without lock reference count
WO2018161844A1 (en) * 2017-03-10 2018-09-13 Huawei Technologies Co., Ltd. Lock-free reference counting
CN108388424A (en) * 2018-03-09 2018-08-10 北京奇艺世纪科技有限公司 A kind of method, apparatus and electronic equipment of calling interface data
CN110704198A (en) * 2018-07-10 2020-01-17 阿里巴巴集团控股有限公司 Data operation method, device, storage medium and processor
CN110704198B (en) * 2018-07-10 2023-05-02 阿里巴巴集团控股有限公司 Data operation method, device, storage medium and processor
CN109271258A (en) * 2018-08-28 2019-01-25 百度在线网络技术(北京)有限公司 Implementation method, device, terminal and the storage medium that Read-Write Locks are reentried
US11119832B2 (en) 2018-08-28 2021-09-14 Baidu Online Network Technology (Beijing) Co., Ltd. Method and device for implementing read-write lock reentry, terminal and storage medium
CN109271258B (en) * 2018-08-28 2020-11-17 百度在线网络技术(北京)有限公司 Method, device, terminal and storage medium for realizing re-entry of read-write lock
CN109656730A (en) * 2018-12-20 2019-04-19 东软集团股份有限公司 A kind of method and apparatus of access cache
CN111459691A (en) * 2020-04-13 2020-07-28 中国人民银行清算总中心 Read-write method and device for shared memory
CN111597193A (en) * 2020-04-28 2020-08-28 广东亿迅科技有限公司 Method for locking and unlocking tree-shaped data
CN111597193B (en) * 2020-04-28 2023-09-26 广东亿迅科技有限公司 Tree data locking and unlocking method
CN111782609A (en) * 2020-05-22 2020-10-16 北京和瑞精准医学检验实验室有限公司 Method for rapidly and uniformly fragmenting fastq file
CN111782609B (en) * 2020-05-22 2023-10-13 北京和瑞精湛医学检验实验室有限公司 Method for rapidly and uniformly slicing fastq file
CN111913810A (en) * 2020-07-28 2020-11-10 北京百度网讯科技有限公司 Task execution method, device, equipment and storage medium under multi-thread scene
CN111913810B (en) * 2020-07-28 2024-03-19 阿波罗智能技术(北京)有限公司 Task execution method, device, equipment and storage medium in multithreading scene
CN112346879A (en) * 2020-11-06 2021-02-09 网易(杭州)网络有限公司 Process management method and device, computer equipment and storage medium
CN112346879B (en) * 2020-11-06 2023-08-11 网易(杭州)网络有限公司 Process management method, device, computer equipment and storage medium
CN113791916A (en) * 2021-11-17 2021-12-14 支付宝(杭州)信息技术有限公司 Object updating and reading method and device
CN113791916B (en) * 2021-11-17 2022-02-08 支付宝(杭州)信息技术有限公司 Object updating and reading method and device
CN115599575A (en) * 2022-09-09 2023-01-13 ***数智科技有限公司(Cn) Novel method for solving concurrent activation and deactivation of cluster logical volume
CN115599575B (en) * 2022-09-09 2024-04-16 ***数智科技有限公司 Novel method for solving concurrent activation and deactivation of cluster logical volumes

Also Published As

Publication number Publication date
CN104572568B (en) 2021-07-23
CN113835901A (en) 2021-12-24

Similar Documents

Publication Publication Date Title
CN104572568A (en) Read lock operation method, write lock operation method and system
US10223762B2 (en) Pipelined approach to fused kernels for optimization of machine learning workloads on graphical processing units
Agullo et al. QR factorization on a multicore node enhanced with multiple GPU accelerators
US9619430B2 (en) Active non-volatile memory post-processing
CN106164881A (en) Work in heterogeneous computing system is stolen
US20190146847A1 (en) Dynamic distributed resource management
US9448934B2 (en) Affinity group access to global data
US9420036B2 (en) Data-intensive computer architecture
US10176101B2 (en) Allocate a segment of a buffer to each of a plurality of threads to use for writing data
US20150212999A1 (en) Using parallel insert sub-ranges to insert into a column store
US9513923B2 (en) System and method for context migration across CPU threads
CN116243959A (en) Implementation of large-scale object version control and consistency
US20130138923A1 (en) Multithreaded data merging for multi-core processing unit
CN115686881A (en) Data processing method and device and computer equipment
US10896062B2 (en) Inter-process memory management
Kachris et al. An fpga-based integrated mapreduce accelerator platform
US20090313452A1 (en) Management of persistent memory in a multi-node computer system
US9947073B2 (en) Memory-aware matrix factorization
CN112955867A (en) Migration of partially completed instructions
US20230147878A1 (en) Implementing heterogeneous memory within a programming environment
Basso et al. Distributed asynchronous column generation
Kwack et al. HPCG and HPGMG benchmark tests on multiple program, multiple data (MPMD) mode on Blue Waters—A Cray XE6/XK7 hybrid system
CN112346879B (en) Process management method, device, computer equipment and storage medium
Zou et al. Supernodal sparse Cholesky factorization on graphics processing units
US9619153B2 (en) Increase memory scalability using table-specific memory cleanup

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20191213

Address after: P.O. Box 31119, grand exhibition hall, hibiscus street, 802 West Bay Road, Grand Cayman, Cayman Islands

Applicant after: Innovative advanced technology Co., Ltd

Address before: Greater Cayman, British Cayman Islands

Applicant before: Alibaba Group Holding Co., Ltd.

TA01 Transfer of patent application right

Effective date of registration: 20210208

Address after: 801-10, Section B, 8th floor, 556 Xixi Road, Xihu District, Hangzhou City, Zhejiang Province 310000

Applicant after: Ant financial (Hangzhou) Network Technology Co.,Ltd.

Address before: Ky1-1205 P.O. Box 31119, hibiscus street, 802 Sai Wan Road, Grand Cayman Islands, ky1-1205

Applicant before: Innovative advanced technology Co.,Ltd.

TA01 Transfer of patent application right
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210908

Address after: 100020 unit 02, 901, floor 9, unit 1, building 1, No.1, East Third Ring Middle Road, Chaoyang District, Beijing

Patentee after: Beijing Aoxing Beisi Technology Co., Ltd

Address before: 801-10, Section B, 8th floor, 556 Xixi Road, Xihu District, Hangzhou City, Zhejiang Province 310000

Patentee before: Ant financial (Hangzhou) Network Technology Co.,Ltd.

TR01 Transfer of patent right