CN117389755A - Multithreading memory sharing method and device - Google Patents


Info

Publication number
CN117389755A
Authority
CN
China
Prior art keywords
shared memory
data
calculation result
thread
source data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311148980.5A
Other languages
Chinese (zh)
Inventor
何晓楠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Kaiwang Data Technology Co ltd
Original Assignee
Beijing Kaiwang Data Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Kaiwang Data Technology Co ltd filed Critical Beijing Kaiwang Data Technology Co ltd
Priority to CN202311148980.5A priority Critical patent/CN117389755A/en
Publication of CN117389755A publication Critical patent/CN117389755A/en
Pending legal-status Critical Current

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/54: Interprogram communication
    • G06F 9/544: Buffers; Shared memory; Pipes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005: Allocation of resources to service a request
    • G06F 9/5011: Allocation of resources to service a request, the resources being hardware resources other than CPUs, servers and terminals
    • G06F 9/5016: Allocation of resources to service a request, the resource being the memory
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00: Indexing scheme relating to G06F 9/00
    • G06F 2209/50: Indexing scheme relating to G06F 9/50
    • G06F 2209/5018: Thread allocation

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multi Processors (AREA)

Abstract

The invention provides a multithreaded memory-sharing method and device. A main thread creates a shared memory and distributes it to a plurality of working threads. When the main thread is idle, it tries to lock the shared memory. When locking succeeds, it checks whether a readable calculation result exists in the shared memory; if so, it reads the calculation result, updates the state data of the shared memory to "no calculation result", and wakes a waiting working thread to write a new calculation result. If not, it checks whether writable source data exists in the shared memory; if source data can be written, it writes the source data, updates the state data of the shared memory to "source data present", and wakes a waiting working thread to read the new source data. It then unlocks the shared memory and repeats these steps. The method lets different threads share memory space directly, avoids frequent data copying and synchronization operations, and reduces inter-thread communication and synchronization overhead.

Description

Multithreading memory sharing method and device
Technical Field
The present invention relates to the field of multithreading communications technologies, and in particular, to a method and apparatus for multithreading memory sharing.
Background
Traditional JavaScript runs in a single-threaded environment and cannot process large amounts of data concurrently. As Web applications have evolved, demands for real-time and high-concurrency performance keep growing, making multithreading an indispensable technique.
In multithreaded programming, the most common problem is how to achieve thread synchronization and collaboration. Conventional schemes copy data between threads: the main thread sends a message to a working thread through the postMessage method, and the working thread receives and processes the message through its onmessage handler. Through the message queue, the worker thread can execute JavaScript code asynchronously and, when needed, return the processing result to the main thread via postMessage. This scheme avoids the safety hazards of sharing memory between threads while enabling asynchronous communication and data transfer. However, schemes based on postMessage and onmessage must first serialize the data and then synchronize it to other threads by copying. When a large amount of data is processed, threads therefore serialize and copy data frequently, incurring high CPU overhead and readily creating performance bottlenecks.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method and apparatus for multithreading shared memory, so as to eliminate or improve one or more drawbacks existing in the prior art, and solve the problem that when an existing multithreading processing scheme processes a large amount of data, data serialization and copying need to be frequently performed, so that CPU overhead is high, and performance bottlenecks easily occur.
In one aspect, the present invention provides a method for sharing a memory by multiple threads, wherein the method is executed on a main thread and multiple working threads, and includes the following steps:
the main thread creates a shared memory and distributes the shared memory to a plurality of working threads; the shared memory comprises lock data, condition variables, state data, source data and a calculation result;
in one cycle, locking the shared memory when the main thread does not process tasks;
when the locking is successful, checking whether a readable calculation result exists in the shared memory; if a readable calculation result exists, the calculation result is read, the state data of the shared memory is updated to be no calculation result data, and a waiting working thread is awakened to write in a new calculation result;
if no readable calculation result exists, checking whether writable source data exists in the shared memory; if source data can be written, the source data is written, the state data of the shared memory is updated to "source data present", and a waiting working thread is awakened to read the new source data;
unlocking the shared memory, and cycling the steps.
In some embodiments of the present invention, waking up a waiting worker thread to write new calculation results, further comprising:
after the working thread is awakened, writing in a new calculation result, updating the state data of the shared memory into data with the calculation result, awakening a thread waiting for reading the calculation result, and unlocking the shared memory.
In some embodiments of the present invention, waking up a waiting worker thread to read new source data further comprises:
after the working thread is awakened, it reads the new source data for calculation, updates the state data of the shared memory to "no source data", wakes a thread waiting to write source data, and unlocks the shared memory.
In some embodiments of the present invention, the main thread creates a shared memory and allocates the shared memory to a plurality of worker threads, wherein the method further comprises:
in one cycle, the working thread locks the shared memory;
when locking succeeds, checking whether readable source data exists in the shared memory; if the source data can be read, the source data is read, the state data of the shared memory is updated to "no source data", a thread waiting to write source data is awakened, and the shared memory is unlocked;
calculating according to the source data;
after the calculation is completed, the working thread locks the shared memory;
when locking succeeds, checking whether a writable calculation result exists in the shared memory; if a writable calculation result exists, the calculation result is written, the state data of the shared memory is updated to "calculation result present", a thread waiting to read the calculation result is awakened, and the shared memory is unlocked;
the above steps are cycled.
In some embodiments of the present invention, when checking whether readable source data exists in the shared memory, if no readable source data exists, the worker thread is suspended and waits to be awakened.
In some embodiments of the present invention, when checking whether a writable calculation result exists in the shared memory, if no writable calculation result exists, the worker thread is suspended and waits to be awakened.
In some embodiments of the present invention, a binary lock algorithm is used to implement locking and unlocking operations on the shared memory.
In another aspect, the present invention provides a multithreaded shared memory device comprising a processor and a memory, wherein the memory has stored therein computer instructions, the processor being operable to execute the computer instructions stored in the memory, the device implementing the steps of the multithreaded shared memory method as defined in any one of the preceding claims when the computer instructions are executed by the processor.
In another aspect, the invention also provides a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of the multithreaded shared memory method of any of the above-mentioned.
The invention has the advantages that:
the invention provides a multithread shared memory method and a multithread shared memory device, wherein a main thread creates a shared memory and distributes the shared memory to a plurality of working threads; when the main thread is idle, trying to lock the shared memory; when the locking is successful, checking whether a readable calculation result exists in the shared memory; if yes, reading a calculation result, updating the state data of the shared memory into data without the calculation result, waking up a waiting working thread, and writing in a new calculation result; if not, checking whether writable source data exists in the shared memory; if the source data which can be written in exists, the source data is written in, the state data of the shared memory is updated into the active data, a waiting working thread is awakened, and new source data is read; unlocking the shared memory, and cycling the steps. The multithread shared memory method provided by the invention can enable different threads Cheng Zhijie to share memory space, avoid frequent data copying and synchronous operation, reduce communication and synchronous overhead among threads, ensure synchronization and cooperation among threads by state locks and condition variables, avoid the problems of deadlock, competition conflict and the like, and is suitable for scenes with high requirements on processing a large amount of data and high concurrency or real-time performance.
Furthermore, the shared memory is realized based on the memory mapping mode of the operating system, and the multithreading is supported to concurrently read and write the memory, so that the performance bottleneck brought by a lock mechanism in the traditional linear data structure can be avoided, and the efficiency of multithreading for accessing data is improved. Meanwhile, the state lock and the condition variable can control the threads in a finer granularity, and the performance and concurrency of the multithreaded program are improved.
Furthermore, the shared memory can only be used by network working threads (Web workers), and the shared memory is independent of each other, so that the access safety of the threads to the shared memory can be ensured, the efficient concurrent processing capacity is ensured, and meanwhile, concurrent errors such as competition and the like can not occur, so that the shared memory among the threads is safer, and the problems of data abnormality, running errors and the like are avoided.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the structure particularly pointed out in the written description and drawings.
It will be appreciated by those skilled in the art that the objects and advantages that can be achieved with the present invention are not limited to the above-described specific ones, and that the above and other objects that can be achieved with the present invention will be more clearly understood from the following detailed description.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate and together with the description serve to explain the invention. In the drawings:
FIG. 1 is a diagram illustrating steps of a method for multithreading shared memory according to an embodiment of the invention.
FIG. 2 is a flow chart of a method for multithreading shared memory according to an embodiment of the invention.
FIG. 3 is a diagram illustrating a partition of shared memory data according to an embodiment of the invention.
Detailed Description
The present invention will be described in further detail with reference to the following embodiments and the accompanying drawings, in order to make the objects, technical solutions and advantages of the present invention more apparent. The exemplary embodiments of the present invention and the descriptions thereof are used herein to explain the present invention, but are not intended to limit the invention.
It should be noted here that, in order to avoid obscuring the present invention due to unnecessary details, only structures and/or processing steps closely related to the solution according to the present invention are shown in the drawings, while other details not greatly related to the present invention are omitted.
It should be emphasized that the term "comprises/comprising" when used herein is taken to specify the presence of stated features, elements, steps or components, but does not preclude the presence or addition of one or more other features, elements, steps or components.
It is also noted herein that the term "coupled" may refer to not only a direct connection, but also an indirect connection in which an intermediate is present, unless otherwise specified.
Hereinafter, embodiments of the present invention will be described with reference to the accompanying drawings. In the drawings, the same reference numerals represent the same or similar components, or the same or similar steps.
It should be emphasized that the references to steps below are not intended to limit the order of the steps, but rather should be understood to mean that the steps may be performed in a different order than in the embodiments, or that several steps may be performed simultaneously.
In order to solve the problem that existing multithreaded processing schemes must frequently serialize and copy data when handling large amounts of data, incurring high CPU overhead and performance bottlenecks, the present invention provides a multithreaded memory-sharing method, as shown in fig. 1, comprising the following steps S101 to S105:
step S101: the main thread creates a shared memory and allocates the shared memory to a plurality of worker threads. The shared memory comprises lock data, condition variables, state data, source data and calculation results.
Step S102: in one cycle, the shared memory is locked when the main thread is not processing tasks.
Step S103: when the locking is successful, checking whether a readable calculation result exists in the shared memory; if the readable calculation result exists, the calculation result is read, the state data of the shared memory is updated to be no calculation result data, and a waiting working thread is awakened to write in a new calculation result.
Step S104: if no readable calculation result exists, check whether writable source data exists in the shared memory; if source data can be written, write the source data, update the state data of the shared memory to "source data present", and wake a waiting working thread to read the new source data.
Step S105: unlocking the shared memory, and cycling the steps.
As shown in FIG. 2, the overall flow of the multithreaded memory-sharing method is divided into a main-thread flow and a worker-thread flow, each of which is described further below.
Main thread flow description:
in step S101, the main thread first creates a shared memory (SharedArrayBuffer) and allocates it to a plurality of worker threads. SharedArrayBuffer is a shared-memory mechanism that lets multiple JavaScript threads share the same memory space. Large amounts of data can be shared among threads through the SharedArrayBuffer, which avoids the frequent data-copying operations and inter-thread synchronization overhead of the traditional approach, improves data-transfer efficiency, and can also speed up execution in the JavaScript engine.
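As a sketch only, step S101 may look as follows in JavaScript; the slot layout, the section sizes, and the worker_threads hand-off shown in the comments are illustrative assumptions rather than the patent's prescribed implementation:

```javascript
// Illustrative header layout: lock word, condition slot, state word.
const LOCK = 0;
const COND = 1;
const STATE = 2;
const HEADER = 3;   // number of 32-bit header slots (assumption)
const DATA = 1024;  // source + result sections, in 32-bit slots (assumption)

// The main thread allocates one SharedArrayBuffer for all threads.
const sab = new SharedArrayBuffer((HEADER + DATA) * Int32Array.BYTES_PER_ELEMENT);
const shared = new Int32Array(sab);

Atomics.store(shared, STATE, 0); // initial state: no data

// The buffer is handed to each worker by reference (no copying), e.g. in Node.js:
//   const { Worker } = require('node:worker_threads');
//   new Worker('./worker.js', { workerData: { sab } });
// or in a browser: worker.postMessage({ sab });
```

Every thread that receives `sab` constructs its own `Int32Array` view over the same underlying memory, so writes by one thread are visible to the others without message-based copying.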
As shown in fig. 3, the SharedArrayBuffer data partition is, in order from left to right: lock data, condition variable, state data, source data section, and calculation result section. Specifically:
lock data: for controlling access to shared resources to avoid contention conflicts and deadlock problems.
In some embodiments, the lock and unlock operations are performed on the shared memory using a binary lock algorithm, i.e., when the lock is occupied, all threads attempting to acquire the lock are blocked until the lock is released.
Condition variable: for coordinating synchronization among multiple threads, when a certain condition is reached, wait is suspended until other threads notify that the condition has been met. Examples of conditions for suspension waiting are described in the flow chart below.
Status data: used to manage the state of the data in the shared memory, so as to guarantee the correctness of concurrent execution in a multithreaded environment. The state data uses a read-write separation strategy that divides the state into four types: no data, source data present, calculation result present, and both source data and calculation result present. It provides methods for reading and writing the source data and the calculation result, and for checking whether each kind of data exists. The implementation uses the SharedArrayBuffer and Atomics objects in JavaScript. Atomics is a built-in object that provides a set of atomic operations for synchronized access to shared memory; used together with SharedArrayBuffer, it enables data synchronization and collaboration among multiple concurrent threads, making memory reads and writes safe and efficient in a multithreaded environment. Atomic here means that even when several threads read and write the same memory location concurrently, each operation runs to completion before the next one begins, without interruption, so the value read or written is always the expected one.
Source data section: written by the main thread; the working threads read it and perform calculations.
Calculation result section: the working threads write the calculation result data here after a calculation completes.
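The lock data and state data described above can be sketched with the Atomics primitives as follows; the function names, slot indices, and the two-bit encoding of the four states are assumptions made for illustration, not the patent's specified implementation:

```javascript
const LOCK = 0, STATE = 1;          // assumed header slot indices
const UNLOCKED = 0, LOCKED = 1;
const HAS_SOURCE = 1 << 0;          // source data present
const HAS_RESULT = 1 << 1;          // calculation result present

// Binary lock: succeeds only if the lock word was UNLOCKED, atomically.
function tryLock(mem) {
  return Atomics.compareExchange(mem, LOCK, UNLOCKED, LOCKED) === UNLOCKED;
}
function unlock(mem) {
  Atomics.store(mem, LOCK, UNLOCKED);
  Atomics.notify(mem, LOCK);        // wake any thread blocked on the lock word
}

// Read-write-separated checks over the four states
// (no data / source only / result only / source and result).
function hasReadableResult(mem) { return (Atomics.load(mem, STATE) & HAS_RESULT) !== 0; }
function canWriteSource(mem)    { return (Atomics.load(mem, STATE) & HAS_SOURCE) === 0; }
function setFlag(mem, f)        { Atomics.or(mem, STATE, f); }
function clearFlag(mem, f)      { Atomics.and(mem, STATE, ~f); }
```

Because `compareExchange`, `or`, and `and` are atomic, a thread that loses the race for the lock observes a consistent lock word and state word, which is what prevents the contention conflicts mentioned above.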
In step S102, the main thread handles the user's interaction events and executes other tasks. When the main thread has no task to process, i.e. in its idle time, it attempts to lock the shared memory; if locking fails, it waits for the next idle period. Only one thread can access the shared memory at a time, which prevents different threads from interfering or conflicting with each other. Performing the check in the main thread's idle time also avoids blocking the main thread and keeps the user's interaction experience smooth. If locking succeeds, the process proceeds to step S103.
In step S103, after locking succeeds, the main thread checks whether a readable calculation result exists in the shared memory. If one exists, it reads the calculation result, updates the state data of the shared memory to "no calculation result", then wakes a waiting sub-thread (working thread) to write new calculation result data, and continues with step S105 to unlock the shared memory. If no readable calculation result exists, the process proceeds to step S104.
In some embodiments, after the awakened working thread writes a new calculation result, it updates the state data of the shared memory to "calculation result present", wakes the thread (usually the main thread) waiting to read the calculation result, and unlocks the shared memory.
In step S104, if no readable calculation result exists, the main thread checks whether writable source data space exists in the shared memory. If source data can be written, it writes the source data, updates the state data of the shared memory to "source data present", then wakes a waiting sub-thread (working thread) to read the new source data, and continues with step S105 to unlock the shared memory.
In some embodiments, after the awakened working thread reads the new source data, it updates the state data of the shared memory to "no source data", wakes the thread (usually the main thread) waiting to write source data, and unlocks the shared memory.
In step S105, the main thread unlocks the shared memory, and releases the modification authority of the shared memory, so that other working threads can be locked for reading and writing operations.
Step S102 to step S105 are a complete cycle operation, after the step S105 is completed, step S102 is re-executed, and the main thread tries to lock the shared memory again to perform the next round of operation.
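One pass of the main-thread loop (steps S102 to S105) can be sketched as a single function. The slot layout, the flag encoding, and the queue of pending inputs are illustrative assumptions; real code would invoke this from an idle callback such as requestIdleCallback rather than a tight loop:

```javascript
const LOCK = 0, STATE = 1, SRC = 2, RES = 3;   // assumed slot indices
const HAS_SOURCE = 1, HAS_RESULT = 2;          // assumed state flags

function mainThreadCycle(mem, pendingInputs, results) {
  // S102: try to lock only when idle; if busy, give up until the next idle slot.
  if (Atomics.compareExchange(mem, LOCK, 0, 1) !== 0) return 'lock-busy';
  let action = 'nothing';
  const state = Atomics.load(mem, STATE);
  if (state & HAS_RESULT) {
    // S103: read the result, mark "no calculation result", wake a waiting worker.
    results.push(Atomics.load(mem, RES));
    Atomics.and(mem, STATE, ~HAS_RESULT);
    Atomics.notify(mem, STATE);
    action = 'read-result';
  } else if (!(state & HAS_SOURCE) && pendingInputs.length > 0) {
    // S104: write new source data, mark "source data present", wake a waiting worker.
    Atomics.store(mem, SRC, pendingInputs.shift());
    Atomics.or(mem, STATE, HAS_SOURCE);
    Atomics.notify(mem, STATE);
    action = 'wrote-source';
  }
  // S105: unlock so the workers can lock the memory for their own reads/writes.
  Atomics.store(mem, LOCK, 0);
  Atomics.notify(mem, LOCK);
  return action;
}
```

The returned action string is only a debugging aid for the sketch; the essential behavior is that the main thread never blocks, touching the shared memory only when the lock is free.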
Work thread flow description:
in some embodiments, in step S101, the main thread creates a shared memory, and after the shared memory is allocated to a plurality of working threads, the working threads can cycle to read and write the shared memory, including steps S201 to S205:
step S201: in one cycle, the worker thread locks the shared memory.
Step S202: when locking succeeds, check whether readable source data exists in the shared memory. If the source data can be read, read it, update the state data of the shared memory to "no source data", wake the thread waiting to write source data, and unlock the shared memory.
Step S203: the calculation is performed based on the source data.
Step S204: after the calculation is completed, the working thread locks the shared memory.
Step S205: when locking succeeds, check whether a writable calculation result exists in the shared memory. If the calculation result can be written, write it, update the state data of the shared memory to "calculation result present", wake the thread waiting to read the calculation result, and unlock the shared memory.
In step S201, the worker thread attempts to lock the shared memory; if locking fails, it retries in a loop until locking succeeds, so as to avoid deadlock and contention-conflict problems.
In step S202, when locking succeeds, the worker thread checks whether readable source data exists in the shared memory. If the source data can be read, it reads the source data and updates the state data of the shared memory to "no source data", avoiding the overhead of transferring and copying the data. It then wakes the thread (typically the main thread) waiting to write source data and unlocks the shared memory.
In some implementations, if there is no readable source data, the worker thread suspends and waits to be awakened. Suspending means temporarily stopping the execution of a process or thread, leaving it inactive until it is awakened.
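The suspend-and-wait behavior maps naturally onto Atomics.wait and Atomics.notify. The sketch below shows the two non-blocking return paths of Atomics.wait; note that in browsers Atomics.wait may only be called from worker threads, while the main thread must poll or use Atomics.waitAsync (this example runs on the main thread only in environments such as Node.js that permit it):

```javascript
const mem = new Int32Array(new SharedArrayBuffer(4));
Atomics.store(mem, 0, 5);

// Returns "not-equal" immediately: the slot does not hold the expected value,
// i.e. the condition already changed before the thread went to sleep.
const r1 = Atomics.wait(mem, 0, 4);

// Returns "timed-out" after 10 ms: the value still matched and no other
// thread called Atomics.notify on this slot in the meantime.
const r2 = Atomics.wait(mem, 0, 5, 10);
```

The third outcome, "ok", occurs when another thread calls `Atomics.notify(mem, 0)` while this thread is blocked, which is exactly the wake-up path used when source data or a calculation result becomes available.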
In step S203, calculation is performed according to the source data read in step S202 until the calculation is completed. The calculation is performed in the independent working thread, so that the main thread is prevented from being blocked by a large amount of calculation tasks, and the use smoothness of a user can be ensured.
In step S204, after the calculation is completed, the worker thread again attempts to lock the shared memory as in step S201; if locking fails, it retries in a loop until locking succeeds, so as to avoid deadlock and contention-conflict problems.
In step S205, when locking succeeds, the worker thread checks whether a writable calculation result exists in the shared memory. If the calculation result can be written, it writes the calculation result, updates the state data of the shared memory to "calculation result present", wakes the thread (usually the main thread) waiting to read the calculation result, and unlocks the shared memory.
In some embodiments, if there is no writable calculation result (the previous result has not yet been read), the worker thread suspends and waits to be awakened.
Steps S201 to S205 are one complete cycle operation of the worker thread, and the worker thread performs steps S201 to S205 in a cycle.
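A single worker pass over steps S201 to S205 can likewise be sketched as one function; the spin-lock retry, the slot indices, and the doubling computation standing in for the real workload are illustrative assumptions:

```javascript
const LOCK = 0, STATE = 1, SRC = 2, RES = 3;   // assumed slot indices
const HAS_SOURCE = 1, HAS_RESULT = 2;          // assumed state flags

function workerCycle(mem) {
  // S201: retry until the lock is acquired.
  while (Atomics.compareExchange(mem, LOCK, 0, 1) !== 0) {}
  if (!(Atomics.load(mem, STATE) & HAS_SOURCE)) {
    Atomics.store(mem, LOCK, 0);               // S202: no readable source data;
    return false;                              // a real worker would Atomics.wait here
  }
  const src = Atomics.load(mem, SRC);          // S202: read the source data
  Atomics.and(mem, STATE, ~HAS_SOURCE);        // mark "no source data"
  Atomics.notify(mem, STATE);                  // wake the thread waiting to write source
  Atomics.store(mem, LOCK, 0);                 // unlock for the duration of the work

  const result = src * 2;                      // S203: compute outside the lock

  while (Atomics.compareExchange(mem, LOCK, 0, 1) !== 0) {}  // S204: relock
  Atomics.store(mem, RES, result);             // S205: publish the result
  Atomics.or(mem, STATE, HAS_RESULT);          // mark "calculation result present"
  Atomics.notify(mem, STATE);                  // wake the thread waiting to read it
  Atomics.store(mem, LOCK, 0);
  return true;
}
```

Holding the lock only around the short read and write phases, and never during the computation in S203, is what keeps the main thread responsive while the worker does the heavy lifting.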
The present invention also provides a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of a method of multithreading shared memory.
Correspondingly, the invention also provides a device comprising a computer apparatus, the computer apparatus comprising a processor and a memory, the memory having stored therein computer instructions for executing the computer instructions stored in the memory, the apparatus implementing the steps of the method as described above when the computer instructions are executed by the processor.
The embodiments of the present invention also provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the multithreaded memory-sharing method described above. The computer readable storage medium may be a tangible storage medium such as Random Access Memory (RAM), memory, Read-Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a floppy disk, a hard disk, a removable memory disk, a CD-ROM, or any other form of storage medium known in the art.
In summary, the present invention provides a multithreaded memory-sharing method and device, wherein a main thread creates a shared memory and allocates it to a plurality of working threads; when the main thread is idle, it tries to lock the shared memory; when locking succeeds, it checks whether a readable calculation result exists in the shared memory; if so, it reads the calculation result, updates the state data of the shared memory to "no calculation result", and wakes a waiting working thread to write a new calculation result; if not, it checks whether writable source data exists in the shared memory; if source data can be written, it writes the source data, updates the state data to "source data present", and wakes a waiting working thread to read the new source data; it then unlocks the shared memory and repeats these steps. The method lets different threads share memory space directly, avoiding frequent data copying and synchronization operations and reducing inter-thread communication and synchronization overhead; the state lock and condition variables guarantee synchronization and collaboration among threads while avoiding deadlock and contention conflicts, making the method suitable for scenarios that process large amounts of data and demand high concurrency or real-time performance.
Furthermore, the shared memory is implemented on top of the operating system's memory-mapping mechanism and supports concurrent reads and writes by multiple threads, which avoids the performance bottleneck introduced by the lock mechanism of traditional linear data structures and improves the efficiency of multithreaded data access. Meanwhile, the state lock and condition variable control the threads at a finer granularity, improving the performance and concurrency of the multithreaded program.
Furthermore, the shared memory can only be used by Web Worker threads, and each shared memory is independent of the others. This guarantees safe thread access to the shared memory and efficient concurrent processing without concurrency errors such as data races, so that memory sharing between threads is safer and problems such as data anomalies and runtime errors are avoided.
Those of ordinary skill in the art will appreciate that the various illustrative components, systems, and methods described in connection with the embodiments disclosed herein can be implemented as hardware, software, or a combination of both. The particular implementation is hardware or software dependent on the specific application of the solution and the design constraints. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention. When implemented in hardware, it may be, for example, an electronic circuit, an Application Specific Integrated Circuit (ASIC), suitable firmware, a plug-in, a function card, or the like. When implemented in software, the elements of the invention are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine readable medium or transmitted over transmission media or communication links by a data signal carried in a carrier wave.
It should be understood that the invention is not limited to the particular arrangements and instrumentalities described above and shown in the drawings. For the sake of brevity, a detailed description of known methods is omitted here. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present invention are not limited to the specific steps described and shown; those skilled in the art can make various changes, modifications, and additions, or change the order of steps, within the spirit of the present invention.
In this disclosure, features that are described and/or illustrated with respect to one embodiment may be used in the same way or in a similar way in one or more other embodiments and/or in combination with or instead of the features of the other embodiments.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, and various modifications and variations can be made to the embodiments of the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (9)

1. A method for multithreaded memory sharing, the method executing on a main thread and a plurality of worker threads, comprising the steps of:
the main thread creating a shared memory and allocating the shared memory to the plurality of worker threads, wherein the shared memory comprises lock data, a condition variable, state data, source data, and a calculation result;
in each cycle, locking the shared memory when the main thread is not processing a task;
when locking succeeds, checking whether a readable calculation result exists in the shared memory; if a readable calculation result exists, reading the calculation result, updating the state data of the shared memory to indicate that no calculation result is present, and waking a waiting worker thread to write a new calculation result;
if no readable calculation result exists, checking whether source data can be written to the shared memory; if source data can be written, writing the source data, updating the state data of the shared memory to indicate that source data is present, and waking a waiting worker thread to read the new source data;
unlocking the shared memory, and repeating the above steps.
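The main-thread cycle of claim 1 can be sketched as follows, paired with a minimal worker so the loop actually runs. This is an illustrative Python model of the protocol, not the Web Worker embodiment of the patent; the `wait()` branch is a practical addition to avoid busy-spinning, and all names are my own:

```python
import threading

class SharedSlot:
    def __init__(self):
        self.cond = threading.Condition()  # lock data + condition variable
        self.has_source = False            # state data: source data present?
        self.has_result = False            # state data: result present?
        self.source = None
        self.result = None

def main_cycle(mem, inputs, results):
    """Claim 1: lock; prefer reading a result; otherwise write source
    data; wake a waiting worker; unlock; repeat until all results arrive."""
    written = 0
    while len(results) < len(inputs):
        with mem.cond:                       # lock the shared memory
            if mem.has_result:               # readable calculation result?
                results.append(mem.result)   # read the result
                mem.has_result = False       # state: no result present
                mem.cond.notify_all()        # wake a worker to write anew
            elif not mem.has_source and written < len(inputs):
                mem.source = inputs[written] # write new source data
                written += 1
                mem.has_source = True        # state: source data present
                mem.cond.notify_all()        # wake a worker to read it
            else:
                mem.cond.wait()              # nothing actionable yet

def worker(mem, count):
    """Minimal counterpart worker so the main loop is runnable."""
    for _ in range(count):
        with mem.cond:
            while not mem.has_source:
                mem.cond.wait()
            x = mem.source
            mem.has_source = False
            mem.cond.notify_all()
        y = x * x                            # compute outside the lock
        with mem.cond:
            while mem.has_result:
                mem.cond.wait()
            mem.result = y
            mem.has_result = True
            mem.cond.notify_all()

mem = SharedSlot()
results = []
t = threading.Thread(target=worker, args=(mem, 4))
t.start()
main_cycle(mem, [1, 2, 3, 4], results)
t.join()
print(results)  # → [1, 4, 9, 16]
```

Because the single result slot holds one value at a time, the main thread collects results in the order the worker produces them.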
2. The method of claim 1, wherein waking a waiting worker thread to write a new calculation result further comprises:
after being woken, the worker thread writing the new calculation result, updating the state data of the shared memory to indicate that a calculation result is present, waking a thread waiting to read the calculation result, and unlocking the shared memory.
3. The method of claim 1, wherein waking a waiting worker thread to read new source data further comprises:
after being woken, the worker thread reading the new source data for calculation, updating the state data of the shared memory to indicate that no source data is present, waking a thread waiting to write source data, and unlocking the shared memory.
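Claims 2 and 3 describe a wait/notify handoff: the woken thread updates the payload and the state data, notifies the peer, and releases the lock. A minimal Python sketch of one such handoff (all names illustrative):

```python
import threading

cond = threading.Condition()
state = {"has_result": False, "result": None}

def worker():
    with cond:
        # write a new result, update the state data,
        # wake the waiting reader, and release the lock on exit
        state["result"] = 42
        state["has_result"] = True
        cond.notify_all()

t = threading.Thread(target=worker)
with cond:
    t.start()
    while not state["has_result"]:
        cond.wait()          # reader suspends until woken
    value = state["result"]  # read under the lock after waking
t.join()
print(value)  # → 42
```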
4. The method of claim 1, wherein after the main thread creates the shared memory and allocates the shared memory to the plurality of worker threads, the method further comprises:
in each cycle, the worker thread locking the shared memory;
when locking succeeds, checking whether readable source data exists in the shared memory; if readable source data exists, reading the source data, updating the state data of the shared memory to indicate that no source data is present, waking a thread waiting to write source data, and unlocking the shared memory;
performing a calculation based on the source data;
after the calculation completes, the worker thread locking the shared memory again;
when locking succeeds, checking whether a calculation result can be written to the shared memory; if the calculation result can be written, writing the calculation result, updating the state data of the shared memory to indicate that a calculation result is present, waking a thread waiting to read the calculation result, and unlocking the shared memory;
repeating the above steps.
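The worker-side cycle of claim 4, with its two separate lock phases around an unlocked computation, can be sketched in Python as follows. The feeder function stands in for the main thread of claim 1; all names are illustrative:

```python
import threading

class SharedSlot:
    def __init__(self):
        self.cond = threading.Condition()  # lock data + condition variable
        self.has_source = False            # state data: source data present?
        self.has_result = False            # state data: result present?
        self.source = None
        self.result = None

def worker_cycle(mem, count):
    """Claim 4: lock and read source data, compute with the lock
    released, then lock again and write the calculation result."""
    for _ in range(count):
        with mem.cond:                 # lock the shared memory
            while not mem.has_source:  # claim 5: suspend until woken
                mem.cond.wait()
            x = mem.source             # read the source data
            mem.has_source = False     # state: no source data present
            mem.cond.notify_all()      # wake a thread waiting to write
        y = 2 * x                      # compute outside the lock
        with mem.cond:                 # lock again after computing
            while mem.has_result:      # claim 6: suspend until woken
                mem.cond.wait()
            mem.result = y             # write the calculation result
            mem.has_result = True      # state: result present
            mem.cond.notify_all()      # wake a thread waiting to read

def feed_and_collect(mem, inputs):
    """Stand-in for the main thread: write all inputs, then read results."""
    out = []
    for x in inputs:
        with mem.cond:
            while mem.has_source:
                mem.cond.wait()
            mem.source = x
            mem.has_source = True
            mem.cond.notify_all()
    for _ in inputs:
        with mem.cond:
            while not mem.has_result:
                mem.cond.wait()
            out.append(mem.result)
            mem.has_result = False
            mem.cond.notify_all()
    return out

mem = SharedSlot()
t = threading.Thread(target=worker_cycle, args=(mem, 3))
t.start()
out = feed_and_collect(mem, [1, 2, 3])
t.join()
print(out)  # → [2, 4, 6]
```

Performing the computation between the two lock phases is what lets other threads use the shared memory while a worker is busy.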
5. The method of claim 4, wherein, when checking whether readable source data exists in the shared memory, if no readable source data exists, the worker thread suspends and waits to be woken.
6. The method of claim 4, wherein, when checking whether a calculation result can be written to the shared memory, if the calculation result cannot be written, the worker thread suspends and waits to be woken.
7. The method of claim 1, wherein a binary lock algorithm is used to perform the locking and unlocking operations on the shared memory.
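Claim 7 names a "binary lock algorithm" without elaborating; one common reading is a lock with exactly two states, locked and unlocked. A hedged Python sketch built on a binary semaphore (the class and method names are my own, not from the patent):

```python
import threading

class BinaryLock:
    """A minimal binary lock: a two-state (locked/unlocked) lock
    modeled with a binary semaphore."""
    def __init__(self):
        self._sem = threading.Semaphore(1)  # one permit: unlocked

    def lock(self):
        self._sem.acquire()                 # blocks while locked

    def try_lock(self):
        # non-blocking attempt; True if the lock was acquired
        return self._sem.acquire(blocking=False)

    def unlock(self):
        self._sem.release()                 # back to unlocked

lk = BinaryLock()
print(lk.try_lock())  # → True  (was unlocked)
print(lk.try_lock())  # → False (already locked)
lk.unlock()
```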
8. A multithreaded memory sharing apparatus, comprising a processor and a memory, wherein computer instructions are stored in the memory and the processor is configured to execute the computer instructions stored in the memory, and wherein the computer instructions, when executed by the processor, implement the steps of the method of any one of claims 1 to 7.
9. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the steps of the method of any one of claims 1 to 7.
CN202311148980.5A 2023-09-06 2023-09-06 Multithreading memory sharing method and device Pending CN117389755A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311148980.5A CN117389755A (en) 2023-09-06 2023-09-06 Multithreading memory sharing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311148980.5A CN117389755A (en) 2023-09-06 2023-09-06 Multithreading memory sharing method and device

Publications (1)

Publication Number Publication Date
CN117389755A true CN117389755A (en) 2024-01-12

Family

ID=89436211

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311148980.5A Pending CN117389755A (en) 2023-09-06 2023-09-06 Multithreading memory sharing method and device

Country Status (1)

Country Link
CN (1) CN117389755A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117971137A (en) * 2024-04-02 2024-05-03 山东海润数聚科技有限公司 Multithreading-based large-scale vector data consistency assessment method and system
CN117971137B (en) * 2024-04-02 2024-06-04 山东海润数聚科技有限公司 Multithreading-based large-scale vector data consistency assessment method and system

Similar Documents

Publication Publication Date Title
US8239871B2 (en) Managing timeout in a multithreaded system by instantiating a timer object having scheduled expiration time and set of timeout handling information
US8973004B2 (en) Transactional locking with read-write locks in transactional memory systems
US8099538B2 (en) Increasing functionality of a reader-writer lock
US7962923B2 (en) System and method for generating a lock-free dual queue
US8458721B2 (en) System and method for implementing hierarchical queue-based locks using flat combining
US20100332770A1 (en) Concurrency Control Using Slotted Read-Write Locks
US8302105B2 (en) Bulk synchronization in transactional memory systems
US20150286586A1 (en) System and Method for Implementing Scalable Adaptive Reader-Writer Locks
US20110125973A1 (en) System and Method for Performing Dynamic Mixed Mode Read Validation In a Software Transactional Memory
US6601120B1 (en) System, method and computer program product for implementing scalable multi-reader/single-writer locks
US20210255889A1 (en) Hardware Transactional Memory-Assisted Flat Combining
US20020138706A1 (en) Reader-writer lock method and system
US20020029239A1 (en) Method and system for enhanced concurrency in a computing environment
CN113835901A (en) Read lock operation method, write lock operation method and system
CN117389755A (en) Multithreading memory sharing method and device
McKenney Selecting locking primitives for parallel programming
US20090271793A1 (en) Mechanism for priority inheritance for read/write locks
CN102929711A (en) Implementing method of real-time transactional memory of software
CN117112244A (en) Asymmetric STM synchronization method for mixed real-time task set
JP7346649B2 (en) Synchronous control system and method
Umatani et al. Pursuing laziness for efficient implementation of modern multithreaded languages
Nguyen et al. Make plor real-time and fairly decentralized
US20190042332A1 (en) Hardware locking primitive system for hardware and methods for generating same
Nguyen et al. Fairly decentralizing a hybrid concurrency control protocol for real-time database systems
JPH01297760A (en) System for lock control and task control in multiprocessor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination