CN117931427A - Resource data processing method and system based on local memory lock and distributed lock - Google Patents

Resource data processing method and system based on local memory lock and distributed lock

Info

Publication number
CN117931427A
CN117931427A (application number CN202311815050.0A)
Authority
CN
China
Prior art keywords
thread
competing
server
lock
distributed lock
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311815050.0A
Other languages
Chinese (zh)
Inventor
马韶华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin Chezhijia Software Co ltd
Original Assignee
Tianjin Chezhijia Software Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin Chezhijia Software Co ltd filed Critical Tianjin Chezhijia Software Co ltd
Priority to CN202311815050.0A priority Critical patent/CN117931427A/en
Publication of CN117931427A publication Critical patent/CN117931427A/en
Pending legal-status Critical Current

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a resource data processing method and system based on a local memory lock and a distributed lock. The method comprises the following steps: allocating a plurality of resource access requests to each server, so that each server processes its resource access requests using a plurality of threads; for each server, in response to requests from the server's threads to acquire its local memory lock, determining the one thread that successfully acquires the local memory lock as the server's competing thread; in response to requests from the competing threads of the plurality of servers to acquire the distributed lock, determining the one competing thread that successfully acquires the distributed lock as the target thread; and, in response to the target thread's request to process the resource data corresponding to the resource access request, processing the resource data under the protection of the distributed lock. The invention ensures that only one thread at a time can acquire the distributed lock and process resource data, while preventing the threads that fail to acquire the distributed lock from retrying continuously.

Description

Resource data processing method and system based on local memory lock and distributed lock
Technical Field
The invention relates to the technical field of the Internet, and in particular to a resource data processing method and system based on local memory locks and distributed locks.
Background
Currently, for a traditional monolithic application deployed on a single machine, Java concurrency APIs (such as ReentrantLock or Synchronized) can be used for mutual-exclusion control. As the service grows, a system originally deployed as a single instance evolves into a distributed cluster system. Because a distributed system executes tasks with multiple threads and multiple processes spread across different machines, the concurrency-control lock strategy that worked for single-machine deployment no longer applies. To solve this problem, a mutual-exclusion mechanism that works across JVMs is needed to control access to shared resources: to guarantee that, under high concurrency in a distributed service, a resource (or group of resources) is operated on by only one thread at a time, mutual exclusion must prevent threads from interfering with one another so as to preserve consistency. A distributed lock is therefore required to keep the system operating correctly in this situation.
In the prior art, using the SETNX command of Redis, only one of the many threads running on different machines at the same time is allowed to acquire the lock and then process the resource. However, the other threads that fail to acquire the lock retry continuously, and this repeated lock acquisition by many threads on a single machine causes CPU usage to spike, degrading service performance.
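The busy-retry behavior criticized above can be sketched as follows. This is an illustrative simulation only: an AtomicReference stands in for Redis and its SETNX command, and the class and method names are invented for this example.

```java
import java.util.concurrent.atomic.AtomicReference;

// Illustrative stand-in for Redis SETNX: the "lock" is set only if no value
// is currently stored, so exactly one caller can succeed at a time.
class SetnxSpinDemo {
    private static final AtomicReference<String> LOCK = new AtomicReference<>(null);

    // Mimics "SETNX key value": returns true only when the slot was empty.
    static boolean setnx(String value) {
        return LOCK.compareAndSet(null, value);
    }

    // Prior-art behavior: a thread that fails keeps retrying in a tight loop.
    // With many threads on one machine doing this, CPU usage spikes.
    static int spinUntilAcquired(String value, int maxAttempts) {
        for (int attempts = 1; attempts <= maxAttempts; attempts++) {
            if (setnx(value)) {
                return attempts; // number of attempts it took to win
            }
        }
        return -1; // gave up after maxAttempts retries
    }

    static void release() {
        LOCK.set(null);
    }
}
```

The spin loop is bounded here only to keep the example terminating; in the scenario the patent criticizes, losing threads retry without bound, which is what drives the CPU spike.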
Therefore, there is a need for a method for processing resource data based on local memory locks and distributed locks to solve the problems in the above-mentioned technical solutions.
Disclosure of Invention
Accordingly, the present invention provides a method and system for processing resource data based on local memory locks and distributed locks, so as to solve or at least alleviate the above problems.
According to one aspect of the present invention, there is provided a resource data processing method based on a local memory lock and a distributed lock, executed in a server cluster, the server cluster comprising a plurality of servers and a distributed lock, each server having a local memory lock. The method comprises: allocating a plurality of resource access requests to each server, so that each server processes its resource access requests using a plurality of threads; for each server, in response to requests from the server's threads to acquire its local memory lock, determining the one thread that successfully acquires the local memory lock as the server's competing thread; in response to requests from the competing threads of the plurality of servers to acquire the distributed lock, determining the one competing thread that successfully acquires the distributed lock as the target thread; and, in response to the target thread's request to process the resource data corresponding to the resource access request, processing the resource data in the server cluster under the protection of the distributed lock.
Optionally, in the resource data processing method based on the local memory lock and the distributed lock according to the present invention, the method further comprises: for each server, treating each thread that fails to acquire the local memory lock as a waiting thread of the server; and adding each waiting thread to the server's local waiting queue, where it waits to reacquire the server's local memory lock.
Optionally, in the resource data processing method based on the local memory lock and the distributed lock according to the present invention, determining, in response to requests from the competing threads of the plurality of servers to acquire the distributed lock, the one competing thread that successfully acquires the distributed lock as the target thread comprises: responding to requests to acquire the distributed lock that the competing threads send by calling the RedisLockUtil.lock() method with the Key of the distributed lock and each competing thread's UUID; and, if any competing thread successfully sets the Value corresponding to the Key of the distributed lock to its own UUID, determining that competing thread as the target thread that has acquired the distributed lock.
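The acquisition step just described can be sketched as follows: the competing thread that first sets the Key's Value to its own UUID wins the distributed lock. The name RedisLockUtil.lock() comes from the text, but the patent does not give its body, so this implementation is an assumption, with a ConcurrentHashMap standing in for Redis.

```java
import java.util.concurrent.ConcurrentHashMap;

// Hedged sketch of RedisLockUtil: a map stands in for Redis, and the lock is
// "held" by whichever thread's UUID is stored under the lock's Key.
class RedisLockUtilSketch {
    private static final ConcurrentHashMap<String, String> REDIS = new ConcurrentHashMap<>();

    // Returns true if this competing thread's UUID was stored as the Value,
    // i.e. it is now the target thread holding the distributed lock.
    static boolean lock(String key, String uuid) {
        return REDIS.putIfAbsent(key, uuid) == null;
    }

    // Only the holder (matching UUID) may delete the Value and free the lock.
    static boolean unlock(String key, String uuid) {
        return REDIS.remove(key, uuid);
    }
}
```

Checking the UUID on unlock prevents one competing thread from accidentally deleting a lock held by another, which is why the text stores the thread's UUID rather than an arbitrary marker.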
Optionally, in the resource data processing method based on the local memory lock and the distributed lock according to the present invention, the method further comprises: in response to the monitoring, by each competing thread that has not acquired the distributed lock, of the target thread's resource data processing state, judging whether each such competing thread's monitoring time exceeds a preset waiting time; and if a competing thread's monitoring time exceeds the preset waiting time without it observing the target thread release the distributed lock, then, in response to the competing threads that have not acquired the distributed lock competing for it again, determining the one competing thread that successfully acquires the distributed lock as the new target thread.
Optionally, in the resource data processing method based on the local memory lock and the distributed lock according to the present invention, each competing thread that has not acquired the distributed lock is adapted to create a listening Key for the target thread based on the target thread's UUID, and the target thread is adapted to set a corresponding Value for the listening Key after releasing the distributed lock. Judging, in response to the monitoring by the competing threads that have not acquired the distributed lock of the target thread's resource data processing state, whether each competing thread's monitoring time exceeds the preset waiting time comprises: in response to each competing thread's query for the Value corresponding to the listening Key, judging whether that competing thread's monitoring time exceeds the preset waiting time; and, in response to a competing thread querying the Value corresponding to the listening Key again after finding it empty, judging whether that competing thread's monitoring time exceeds the preset waiting time.
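The listening mechanism above can be sketched as follows. The listen-Key layout, method names, and poll interval are assumptions made for illustration; a ConcurrentHashMap again stands in for Redis.

```java
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the listening step: each competing thread polls the Value of a
// listen Key derived from the target thread's UUID, and the target thread
// sets that Value when it releases the distributed lock. Polling stops when
// the Value appears or the configured wait time is exceeded.
class ListenKeySketch {
    static final ConcurrentHashMap<String, String> REDIS = new ConcurrentHashMap<>();

    static String listenKey(String targetUuid) {
        return "listen:" + targetUuid; // assumed key layout
    }

    // Called by the target thread after it releases the distributed lock.
    static void signalRelease(String targetUuid) {
        REDIS.put(listenKey(targetUuid), "released");
    }

    // Returns true if the release was observed within maxWaitMillis,
    // false once the preset waiting time is exceeded.
    static boolean awaitRelease(String targetUuid, long maxWaitMillis) {
        long deadline = System.currentTimeMillis() + maxWaitMillis;
        while (System.currentTimeMillis() < deadline) {
            if (REDIS.get(listenKey(targetUuid)) != null) {
                return true;
            }
            try {
                Thread.sleep(10); // poll with a pause rather than a busy spin
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false;
    }
}
```

Sleeping between queries is what distinguishes this design from the prior-art tight retry loop: the waiting competing threads consume almost no CPU while the target thread works.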
Optionally, in the resource data processing method based on the local memory lock and the distributed lock according to the present invention, the target thread is further adapted, after releasing the distributed lock, to delete the Value corresponding to the Key of the distributed lock and to release the local memory lock it holds, so as to wake up the waiting threads in the local waiting queue of the server to which the target thread belongs.
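The release order described in this step can be sketched as follows. The two-step sequence (delete the distributed lock's Value, then release the local memory lock) follows the text, while the class and field names are invented and a ConcurrentHashMap stands in for Redis.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of the target thread's release sequence. Releasing the ReentrantLock
// is what allows a waiting thread queued on it to proceed.
class ReleaseOrderSketch {
    static final ConcurrentHashMap<String, String> REDIS = new ConcurrentHashMap<>();
    static final ReentrantLock LOCAL_LOCK = new ReentrantLock(); // one per server in the real design

    // For the demo: the calling thread becomes the competing thread and then
    // wins the distributed lock.
    static void acquireForDemo(String key, String uuid) {
        LOCAL_LOCK.lock();     // became the server's competing thread
        REDIS.put(key, uuid);  // won the distributed lock
    }

    static void releaseAll(String key, String uuid) {
        REDIS.remove(key, uuid); // 1. delete the distributed lock's Value
        LOCAL_LOCK.unlock();     // 2. free the local memory lock, waking local waiters
    }
}
```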
Optionally, in the resource data processing method based on the local memory lock and the distributed lock according to the present invention, the method further comprises: if a competing thread observes, within the preset waiting time, that the target thread has finished processing the resource data and released the distributed lock, that competing thread exits monitoring and releases the local memory lock it holds, so as to wake up the waiting threads in the local waiting queue of the server to which the competing thread belongs.
Optionally, in the resource data processing method based on the local memory lock and the distributed lock according to the present invention, responding to the request of the plurality of threads to acquire the local memory lock of the server comprises: querying the cache for the resource data corresponding to the resource access request; if the resource data is found in the cache, returning it directly; and, only if the resource data is not found in the cache, responding to the threads' requests to acquire the local memory lock of the server.
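This cache-first flow can be sketched as follows; the cache and the loader below are illustrative stand-ins, since the patent does not specify a cache implementation.

```java
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the cache-first step: look the resource up in the cache, return
// it on a hit, and only fall through to lock acquisition on a miss.
class CacheFirstSketch {
    static final ConcurrentHashMap<String, String> CACHE = new ConcurrentHashMap<>();

    static String getResource(String key) {
        String cached = CACHE.get(key);
        if (cached != null) {
            return cached; // cache hit: no local or distributed lock needed
        }
        // Cache miss: in the full design, the server's threads would now
        // contend for the local memory lock (and then the distributed lock)
        // before loading; loadAndCache() stands in for that protected load.
        return loadAndCache(key);
    }

    static String loadAndCache(String key) {
        String value = "loaded:" + key; // stand-in for the real data source
        CACHE.put(key, value);
        return value;
    }
}
```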
Optionally, in the resource data processing method based on the local memory lock and the distributed lock according to the present invention, the server cluster includes k servers, where k > 1 and k is an integer. Allocating a plurality of resource access requests to each server comprises: in response to receiving kn resource access requests, allocating n resource access requests to each server, so that each server processes its n resource access requests using n threads.
According to one aspect of the present invention there is provided a resource data processing system comprising: a server cluster comprising a plurality of servers and a distributed lock, each server having a local memory lock, and adapted to perform the method as described above to process resource data; and a plurality of clients, each client being adapted to send resource access requests to the server cluster.
According to one aspect of the invention, there is provided a computing device comprising: at least one processor; a memory storing program instructions, wherein the program instructions are configured to be adapted to be executed by the at least one processor, the program instructions comprising instructions for performing a local memory lock and distributed lock based resource data processing method as described above.
According to one aspect of the present invention, there is provided a readable storage medium storing program instructions that, when read and executed by a computing device, cause the computing device to perform a local memory lock and distributed lock based resource data processing method as described above.
According to the resource data processing method and system based on the local memory lock and the distributed lock of the present invention, the server cluster comprises a plurality of servers and one distributed lock, and each server has a local memory lock. When a plurality of resource access requests are received, they are distributed among the servers; on each server, the one thread that successfully acquires the local memory lock is determined as that server's competing thread; among the competing threads, the one that successfully acquires the distributed lock is determined as the target thread; and only this unique target thread processes the resource data under the protection of the distributed lock. Thus, only one competing thread among the threads of any server can request the distributed lock, and only one of the competing threads across servers can acquire it to process the resource data. This ensures that, among all threads of all servers in the server cluster, only one thread at a time can acquire the distributed lock and process resource data, while the threads that fail to acquire the distributed lock are prevented from continuously retrying to acquire it. Consumption of CPU resources is thereby reduced, service performance is improved, and a high-performance exclusive distributed lock is realized.
The foregoing is only an overview of the technical solution of the present invention. In order that the technical means of the present invention may be understood more clearly and implemented in accordance with the content of the specification, and in order to make the above and other objects, features and advantages of the present invention more readily apparent, specific embodiments of the invention are described below.
Drawings
To the accomplishment of the foregoing and related ends, certain illustrative aspects are described herein in connection with the following description and the annexed drawings, which set forth the various ways in which the principles disclosed herein may be practiced, and all aspects and equivalents thereof are intended to fall within the scope of the claimed subject matter. The above, as well as additional objects, features, and advantages of the present disclosure will become more apparent from the following detailed description when read in conjunction with the accompanying drawings. Like reference numerals generally refer to like parts or elements throughout the present disclosure.
FIG. 1 illustrates a schematic diagram of a resource data processing system 100, according to one embodiment of the invention;
FIG. 2 shows a schematic diagram of a computing device 200 according to one embodiment of the invention;
FIG. 3 illustrates a flow diagram of a method 300 for processing resource data based on local memory locks and distributed locks, according to one embodiment of the invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
In order to facilitate understanding, terms related to the present invention are explained below.
Redis: an open-source, high-performance, log-type Key-Value database written in ANSI C. It can operate in memory, supports persistence, and provides APIs for multiple languages.
UUID: full scale Universally Unique Identifier, a universally unique identification code. The UUID allows all elements in the distributed system to have unique identification information without the need for identification information assignment by the central control terminal. In this way, everyone can create a UUID that does not conflict with others.
The resource data processing method based on the local memory lock and the distributed lock provided by the embodiment of the invention can be implemented in a resource data processing system. The resource data processing system of the present invention is described below.
FIG. 1 shows a schematic diagram of a resource data processing system 100 according to one embodiment of the invention.
As shown in FIG. 1, the resource data processing system 100 includes a server cluster 120 and a plurality of clients 110. The server cluster 120 includes a plurality of servers (server instances) and a distributed lock 125, and each server has a local memory lock. The server cluster 120 may be communicatively coupled to the plurality of clients 110, for example over a wired or wireless network, and each client 110 may send resource access requests to the server cluster 120.
Only a few clients 110 and a few servers are schematically illustrated in fig. 1, and it should be noted that the present invention is not limited to any particular number of servers included in server cluster 120, or to any particular number of clients 110 included in resource data processing system 100.
The client 110 may be a terminal used by a user, specifically, a personal computer such as a desktop computer or a notebook computer, or a mobile terminal such as a mobile phone, a tablet computer, a multimedia device, or an intelligent wearable device, but is not limited thereto. Client 110 may also be an application program residing in a terminal.
Each server may be implemented as any computing device capable of parsing and storing data, and the present invention does not limit the specific kind of server. For example, a server may be implemented as a desktop computer, a notebook computer, a processor chip, a mobile phone, a tablet computer, and so on, but is not limited thereto. A server may also be a service program residing in a computing device.
In an embodiment of the present invention, the server cluster 120 may allocate a plurality of resource access requests for each server, respectively, such that each server processes the plurality of resource access requests using a plurality of threads, respectively. It should be noted that the resource access request is an access request for resource data, including but not limited to an acquisition request, an update request, and a deletion request for resource data.
In some embodiments, when the server cluster 120 receives multiple resource access requests from multiple clients 110, the multiple resource access requests may be distributed to respective servers, which may utilize multiple threads to process the distributed multiple resource access requests.
In some embodiments, the server cluster 120 may include k servers, where k > 1 and k is an integer. As shown in FIG. 1, the server cluster 120 includes, for example, a server A, a server B, a server C, and so on. When the server cluster 120 receives kn resource access requests from the clients 110, it may, in response, allocate n resource access requests to each server, and each server may process its n resource access requests using n threads. For example, the n threads of server A are threads a1, a2, ...; the n threads of server B are threads b1, b2, ...; and the n threads of server C are threads c1, c2, ....
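The even split of kn requests over k servers can be sketched as follows. Round-robin is used here only as one way to achieve the n-per-server allocation; the patent does not prescribe a particular distribution strategy.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of distributing k*n request IDs over k servers, n per server.
class RequestDistributionSketch {
    static List<List<Integer>> distribute(int k, int n) {
        List<List<Integer>> perServer = new ArrayList<>();
        for (int i = 0; i < k; i++) {
            perServer.add(new ArrayList<>());
        }
        for (int req = 0; req < k * n; req++) {
            perServer.get(req % k).add(req); // round-robin assignment
        }
        return perServer; // each of the k lists holds exactly n request IDs
    }
}
```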
In an embodiment of the present invention, for each server, the server cluster 120 may determine, in response to a request of multiple threads of the server to acquire a local memory lock of the server, one of the multiple threads that successfully acquires the local memory lock as a competing thread of the server.
It should be noted that the competing thread of any one server is the only thread in the server that qualifies for a competing distributed lock.
In the server cluster 120 according to the present invention, each server has a local memory lock. For each server in the server cluster 120, the server's threads first request to acquire its local memory lock; only one of those threads can succeed, and that thread serves as the server's unique competing thread, competing with the competing threads of the other servers for the single distributed lock 125 in the server cluster 120.
In this way, at any moment each server contributes at most one competing thread to the competition with the other servers' competing threads for the unique distributed lock 125. That is, only one thread among the multiple threads of any server may request to contend for the distributed lock 125.
In some embodiments, each server also has its own local waiting queue, as shown in FIG. 1. For each server, the threads that fail to acquire the local memory lock serve as the server's waiting threads and are added to its local waiting queue, where each waiting thread waits in a loop to reacquire the server's local memory lock.
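A minimal sketch of the per-server local memory lock, assuming a ReentrantLock as the primitive (the patent cites Java's concurrency APIs but does not name a specific class): the thread whose tryLock() succeeds becomes the competing thread, and ReentrantLock's internal queue plays the role of the local waiting queue for threads that later block on it.

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch of one server's local memory lock. Of the server's threads, only
// the one whose tryLock() succeeds becomes the competing thread.
class LocalMemoryLockSketch {
    static final ReentrantLock LOCAL_LOCK = new ReentrantLock();

    // Returns true if the calling thread became the server's competing thread.
    static boolean tryBecomeCompetingThread() {
        return LOCAL_LOCK.tryLock();
    }

    static void releaseLocalLock() {
        LOCAL_LOCK.unlock(); // wakes the next queued waiting thread
    }

    // Demo helper: runs tryLock() on a separate thread to show exclusion
    // across threads (tryLock from the same thread would succeed reentrantly).
    static boolean anotherThreadCanAcquire() {
        final boolean[] acquired = {false};
        Thread t = new Thread(() -> {
            if (LOCAL_LOCK.tryLock()) {
                acquired[0] = true;
                LOCAL_LOCK.unlock();
            }
        });
        t.start();
        try {
            t.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return acquired[0];
    }
}
```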
In an embodiment of the present invention, the server cluster 120 may determine, in response to requests from the competing threads of the plurality of servers to acquire the distributed lock 125, the one competing thread that successfully acquires the distributed lock 125, and take that competing thread as the target thread. Then, in response to the target thread's request to process the resource data corresponding to the resource access request, the corresponding resource data is processed in the server cluster 120 under the protection of the distributed lock 125.
In an embodiment of the present invention, processing the resource data includes, but is not limited to: acquiring resource data, updating the resource data and deleting the resource data.
According to the resource data processing system 100 provided by the embodiment of the present invention, only one competing thread among the threads of any server can request the distributed lock, and only one of the competing threads across servers can successfully acquire the distributed lock to process the resource data. This ensures that, among all threads of all servers in the server cluster, only one thread at a time can acquire the distributed lock and process resource data, while the threads that fail to acquire the distributed lock are prevented from continuously retrying to acquire it. Consumption of CPU resources is thereby reduced, service performance is improved, and a high-performance exclusive distributed lock is realized.
In an embodiment of the present invention, the resource data processing method based on local memory locks and distributed locks may be performed in a server cluster 120 of the resource data processing system 100. The method 300 for processing resource data based on local memory locks and distributed locks of the present invention will be described in detail below.
In some embodiments, the server cluster 120 of the present invention may be implemented as a computing device such that the local memory lock and distributed lock based resource data processing method 300 of the present invention may be performed in the computing device.
FIG. 2 shows a schematic diagram of a computing device 200 according to one embodiment of the invention. As shown in FIG. 2, in a basic configuration, computing device 200 includes at least one processing unit 202 and a system memory 204. According to one aspect, the processing unit 202 may be implemented as a processor, depending on the configuration and type of computing device. The system memory 204 includes, but is not limited to, volatile storage (e.g., random access memory), non-volatile storage (e.g., read only memory), flash memory, or any combination of such memories. According to one aspect, an operating system 205 is included in system memory 204.
According to one aspect, operating system 205 is suitable for controlling the operation of computing device 200, for example. Further, examples are practiced in connection with a graphics library, other operating systems, or any other application program and are not limited to any particular application or system. This basic configuration is illustrated in fig. 2 by those components within the dashed line. According to one aspect, computing device 200 has additional features or functionality. For example, according to one aspect, computing device 200 includes additional data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, or tape. Such additional storage is illustrated in fig. 2 by removable storage device 209 and non-removable storage device 210.
As set forth hereinabove, according to one aspect, program modules 203 are stored in system memory 204. According to one aspect, program module 203 may include one or more applications; the invention is not limited to the type of application. For example, the applications may include: email and contacts applications, word processing applications, spreadsheet applications, database applications, slide presentation applications, drawing or computer-aided design applications, web browser applications, etc.
In an embodiment according to the present invention, program module 203 includes a plurality of program instructions for performing the local memory lock and distributed lock based resource data processing method 300 of the present invention.
According to one aspect, the examples may be practiced in a circuit comprising discrete electronic components, a packaged or integrated electronic chip containing logic gates, a circuit utilizing a microprocessor, or on a single chip containing electronic components or a microprocessor. For example, examples may be practiced via a system on a chip (SOC) in which each or many of the components shown in FIG. 2 may be integrated on a single integrated circuit. According to one aspect, such SOC devices may include one or more processing units, graphics units, communication units, system virtualization units, and various application functions, all of which are integrated (or "burned") onto a chip substrate as a single integrated circuit. When operating via an SOC, the functionality described herein may be operated via dedicated logic integrated with other components of computing device 200 on a single integrated circuit (chip). Embodiments of the invention may also be practiced using other technologies capable of performing logical operations (e.g., AND, OR, and NOT), including but not limited to mechanical, optical, fluidic, and quantum technologies. In addition, embodiments of the invention may be practiced within a general purpose computer or in any other circuit or system.
According to one aspect, the computing device 200 may also have one or more input devices 212, such as a keyboard, mouse, pen, voice input device, touch input device, and the like. Output device(s) 214 such as a display, speakers, printer, etc. may also be included. The foregoing devices are examples and other devices may also be used. Computing device 200 may include one or more communication connections 216 that allow communication with other computing devices 218. Examples of suitable communication connections 216 include, but are not limited to: RF transmitter, receiver and/or transceiver circuitry; universal Serial Bus (USB), parallel and/or serial ports.
The term computer readable media as used herein includes computer storage media. Computer storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information (e.g., computer readable instructions, data structures, or program modules). System memory 204, removable storage 209, and non-removable storage 210 are all examples of computer storage media (i.e., memory storage). Computer storage media may include Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other article of manufacture that can be used to store information and that can be accessed by computing device 200. According to one aspect, any such computer storage media may be part of computing device 200. Computer storage media does not include a carrier wave or other propagated data signal.
According to one aspect, communication media is embodied by computer readable instructions, data structures, program modules, or other data in a modulated data signal (e.g., carrier wave or other transport mechanism) and includes any information delivery media. According to one aspect, the term "modulated data signal" describes a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio Frequency (RF), infrared, and other wireless media.
In an embodiment in accordance with the invention, computing device 200 is configured to perform the local memory lock and distributed lock based resource data processing method 300. Computing device 200 includes one or more processors and one or more readable storage media storing program instructions which, when executed by the one or more processors, cause the computing device to perform the local memory lock and distributed lock based resource data processing method 300 of embodiments of the invention.
In some embodiments, the computing devices performing the local memory lock and distributed lock based resource data processing method 300 in embodiments of the present invention may be computing devices in the server cluster 120 such that the local memory lock and distributed lock based resource data processing method 300 may be performed in the server cluster 120.
In an embodiment of the present invention, the server cluster 120 includes a plurality of servers and a distributed lock 125, each server having a local memory lock. The server cluster 120 may be communicatively coupled to a plurality of clients 110.
The method 300 for processing resource data based on local memory lock and distributed lock in the embodiment of the present invention is described in detail below.
FIG. 3 illustrates a flow diagram of a method 300 for processing resource data based on local memory locks and distributed locks, according to one embodiment of the invention. As shown in FIG. 3, the local memory lock and distributed lock based resource data processing method 300 includes the following steps 310-340.
In step 310, a plurality of resource access requests are respectively assigned to each server in the server cluster 120 such that each server processes the plurality of resource access requests using a plurality of threads, respectively.
It should be noted that the resource access request is an access request for resource data, including but not limited to an acquisition request, an update request, and a deletion request for resource data.
In some embodiments, when the server cluster 120 receives multiple resource access requests from multiple clients 110, the multiple resource access requests may be distributed to respective servers, which may utilize multiple threads to process the distributed multiple resource access requests.
In some embodiments, the server cluster 120 includes k servers, where k > 1 and k is an integer. As shown in fig. 1, the server cluster 120 includes, for example, server A, server B, server C, and so on. When the server cluster 120 receives kn resource access requests from the clients 110, n resource access requests may be allocated to each server, and each server may process its n resource access requests using n threads. For example, the n threads of server A are thread a1, thread a2, and so on; the n threads of server B are thread b1, thread b2, and so on; and the n threads of server C are thread c1, thread c2, and so on.
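As a rough illustration of this allocation step, the kn requests can be spread round-robin so that each of the k servers receives n of them. The patent does not prescribe a distribution strategy, so the following is only a sketch with illustrative names:

```python
# Illustrative round-robin distribution of kn resource access requests
# across k servers (the strategy itself is an assumption, not from the patent).

def assign_requests(requests, k):
    """Return k buckets; with kn requests, each bucket receives n of them."""
    buckets = [[] for _ in range(k)]
    for i, request in enumerate(requests):
        buckets[i % k].append(request)  # round-robin assignment
    return buckets
```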
In some embodiments, the server cluster 120 has a cache, which may specifically be a Redis cache. After receiving the plurality of resource access requests, the server cluster 120 may query the cache for corresponding resource data according to each resource access request. For example, the corresponding resource data may be queried from the cache based on the Key of the resource data in the cache. If the resource data is found in the cache (a cache hit), the resource data in the cache is returned directly to each client 110. If the resource data is not found in the cache (a cache miss), the following steps 320-340 may be performed. Performing steps 320-340 enables processing of resource data under highly concurrent resource access requests.
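The cache-lookup step described above can be sketched as follows. This simplified illustration simulates the Redis cache with an in-memory dict; the function name and key format are hypothetical, not from the original:

```python
# Simplified cache-aside lookup; a dict stands in for the Redis cache.
# `handle_request` and the "resource:<id>" key format are assumptions.

cache = {"resource:42": "cached-value"}

def handle_request(key):
    """Return cached resource data on a hit; None signals a miss,
    after which steps 320-340 (lock acquisition) would run."""
    value = cache.get(key)  # analogous to a Redis GET
    if value is not None:   # cache hit: return directly to the client
        return value
    return None             # cache miss: fall through to the lock path
```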
In step 320, for each server in the server cluster 120, in response to a request by a plurality of threads of the server to acquire a local memory lock of the server, a thread of the plurality of threads that successfully acquired the local memory lock is determined to be a competing thread of the server.
It should be noted that the competing thread of any one server is the only thread in that server that qualifies to compete for the distributed lock.
In the server cluster 120 according to the present invention, each server has a local memory lock. For each server in the server cluster 120, multiple threads of the server may first request to acquire the local memory lock of that server. Only one of those threads can successfully acquire the local memory lock, and that thread serves as the server's unique competing thread, competing with the competing threads of the other servers for the unique distributed lock 125 in the server cluster 120.

In this way, at any given time each server has at most one competing thread, which competes with the competing threads of the other servers to obtain the unique distributed lock 125. That is, of the multiple threads of any one server, only its single competing thread may request to contend for the distributed lock 125.
In addition, in some embodiments, each server also has a local wait queue, respectively. For each server, the threads that do not acquire the local memory lock may be used as waiting threads for the server. And, each waiting thread may be added to the local waiting queue of the server, so that each waiting thread in the local waiting queue may wait in a loop to reacquire the local memory lock of the server.
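The per-server election described above — one thread wins the server's local memory lock and becomes its sole competing thread while the others join a local wait queue — might look roughly like this in Python. The class and method names are illustrative, not the patent's implementation:

```python
import threading
from collections import deque

class LocalLockElection:
    """Per-server election sketch: the thread that wins the local memory
    lock becomes the server's sole competing thread; the rest join a
    local wait queue instead of contending for the distributed lock."""

    def __init__(self):
        self._lock = threading.Lock()   # the server's local memory lock
        self.wait_queue = deque()       # the server's local wait queue

    def try_become_competitor(self, thread_id):
        if self._lock.acquire(blocking=False):  # non-blocking attempt
            return True                         # this thread is the competitor
        self.wait_queue.append(thread_id)       # losers wait locally
        return False

    def release(self):
        """Release the local memory lock so a waiting thread can retry."""
        self._lock.release()
```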
In step 330, in response to requests of the competing threads of the plurality of servers to acquire the distributed lock 125, the one competing thread that successfully acquires the distributed lock 125 is determined as the target thread.
It should be noted that, in the embodiment of the present invention, only one competing thread among the competing threads of the plurality of servers can successfully acquire the distributed lock 125. Only the target thread that successfully acquired the distributed lock 125 has the right to process the resource data. That is, the target thread may request that the server cluster 120 process the resource data (i.e., the resource data corresponding to the resource access request) based on the distributed lock 125.
In step 340, corresponding resource data in the server cluster 120 is processed in response to the processing request of the target thread for the resource data corresponding to the resource access request based on the distributed lock 125.
In an embodiment of the present invention, processing the resource data includes, but is not limited to: acquiring resource data, updating the resource data and deleting the resource data.
In some embodiments, the server cluster 120 also includes a data storage device, and the resource data processed in step 340 is data in the data storage device. That is, in step 340, the corresponding resource data in the data storage devices of the server cluster 120 may be processed in response to a processing request by the target thread for the resource data corresponding to the resource access request based on the distributed lock 125.
Thus, according to the resource data processing method 300 based on the local memory lock and the distributed lock provided by the present invention, among all the threads of the plurality of servers in the server cluster 120, only one thread (target thread) can acquire the distributed lock 125 at a time to process the resource, and specifically, request to process the resource data corresponding to the resource access request.
In some embodiments, the distributed lock 125 may be a Redis distributed lock. Any competing thread is adapted to call the RedisLockUtil.lock() method to request contention for the distributed lock 125, based on the Key of the distributed lock and the UUID of the competing thread. It should be noted that any competing thread may generate its own UUID, which serves as the unique identifier of that competing thread; the UUID values of the competing threads differ from one another. The Key of the distributed lock and the UUID of the competing thread are used as parameters of the RedisLockUtil.lock() method. In other embodiments, an expiration time parameter may also be passed.
In some embodiments, the competing thread may set the Value corresponding to the Key of the distributed lock to the UUID of the competing thread through the SETNX command of Redis. If the set succeeds, the competing thread has successfully acquired the distributed lock 125.
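A minimal sketch of this SETNX-style acquisition follows, simulating Redis with a plain dict. A real deployment would use an actual Redis client and pass an expiration time; all names here are assumptions:

```python
# Dict-based stand-in for Redis SETNX semantics: the first competing
# thread to set the lock Key wins; later attempts fail.

redis_store = {}

def setnx(key, value):
    """Set key to value only if key is absent (Redis SETNX semantics)."""
    if key in redis_store:
        return False
    redis_store[key] = value
    return True

def try_acquire_distributed_lock(lock_key, thread_uuid):
    """A competing thread stores its UUID under the lock Key; success
    means it acquired the distributed lock and becomes the target thread."""
    return setnx(lock_key, thread_uuid)
```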
In some embodiments, in step 330, in response to requests of the plurality of servers to contend for the distributed lock 125, determining the one competing thread that successfully acquires the distributed lock 125 as the target thread may be performed as follows:

The RedisLockUtil.lock() method is invoked in response to the competing threads of the multiple servers sending requests to acquire the distributed lock 125 based on the Key of the distributed lock and the UUIDs of the competing threads.

If any competing thread successfully sets the Value corresponding to the Key of the distributed lock to its own UUID (i.e., successfully acquires the distributed lock 125), that competing thread is determined to be the target thread. Meanwhile, the set operations of the other competing threads fail, and they do not acquire the distributed lock 125.
In some embodiments, each competing thread that does not acquire the distributed lock 125 may request to snoop the resource data processing state of the target thread, including whether the target thread has finished processing the resource data and released the distributed lock 125.
As shown in fig. 3, the method 300 for processing resource data based on local memory lock and distributed lock of the present invention may further include the following steps:
in step 350, in response to a snoop request, from each competing thread that did not acquire the distributed lock 125, for the resource data processing state of the target thread, it is determined whether the snoop time of each competing thread exceeds a predetermined wait time.
In step 360, if the snoop time of a competing thread that did not acquire the distributed lock 125 exceeds the predetermined wait time and the target thread has not been observed to release the distributed lock 125, the process may return to step 330, where: in response to the requests of the competing threads that did not acquire the distributed lock 125 to contend for it again, the one competing thread among them that successfully acquires the distributed lock 125 is determined as the target thread. Step 340 is then performed: in response to the target thread's processing request, based on the distributed lock 125, for the resource data corresponding to the resource access request, the resource data in the server cluster 120 is processed.
In addition, any competing thread that again fails to acquire the distributed lock 125 this time may return to step 350.
In some embodiments, each competing thread that does not acquire the distributed lock 125 may create a snoop Key for the target thread based on the UUID of the target thread. After releasing the distributed lock 125, the target thread may set a corresponding Value for the snoop Key. Based on this, each competing thread that does not acquire the distributed lock 125 may determine whether the target thread has released the distributed lock 125 by querying whether the Value corresponding to the snoop Key is null. If the Value corresponding to the snoop Key is null, the target thread has not yet released the distributed lock 125. If the Value corresponding to the snoop Key is not null, the target thread has released the distributed lock 125.

In some embodiments, after a competing thread finds the Value corresponding to the snoop Key to be null, it may sleep for a predetermined interval time and then query the Value corresponding to the snoop Key again.

It should be noted that each time a competing thread queries the Value corresponding to the snoop Key, the server cluster 120 determines whether the snoop time of that competing thread exceeds the predetermined wait time. In one embodiment, the predetermined interval time is, for example, 10 milliseconds, but the present invention is not limited to a specific sleep duration.

Accordingly, step 350 may be performed as follows: in response to each competing thread's query request for the Value corresponding to the snoop Key, determine whether the snoop time of that competing thread exceeds the predetermined wait time; and in response to each competing thread's repeated query request for the Value corresponding to the snoop Key, issued after finding the Value to be null and sleeping for the predetermined interval time, determine whether the snoop time of that competing thread exceeds the predetermined wait time.
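The polling loop described above — query the snoop Key, sleep for the interval, and give up once the predetermined wait time elapses — can be sketched as follows. The function name and dict-based store are illustrative stand-ins for a Redis client:

```python
import time

def wait_for_release(store, snoop_key, wait_timeout, interval=0.01):
    """Poll the snoop Key until the target thread sets a non-null Value
    (lock released) or the predetermined wait time elapses. `store`
    stands in for Redis; the 10 ms default interval mirrors the example
    in the text."""
    deadline = time.monotonic() + wait_timeout
    while time.monotonic() < deadline:
        if store.get(snoop_key) is not None:  # non-null Value: released
            return True
        time.sleep(interval)                  # sleep before re-querying
    return False                              # timed out: re-contend for the lock
```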
In some embodiments, after releasing the distributed lock 125, the target thread may delete the Value corresponding to the Key of the distributed lock (i.e., the UUID of the target thread) in addition to setting the corresponding Value for the snoop Key, and release the local memory lock acquired by the target thread, so as to wake up each waiting thread in the local waiting queue of the server corresponding to the target thread. Further, each waiting thread that wakes up may request that a local memory lock be acquired.
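The release sequence just described (set a Value on the snoop Key, delete the Value under the lock's Key, then release the local memory lock so local waiters wake up) might be sketched as below. The ownership check before deletion is a common safeguard and an assumption here, as are all names:

```python
import threading

def release_distributed_lock(store, lock_key, snoop_key, my_uuid, local_lock):
    """Target thread's release sequence (sketch): notify listeners via
    the snoop Key, delete the lock's Value, then release the local
    memory lock so this server's waiting threads can retry."""
    store[snoop_key] = "released"        # set a Value for the snoop Key
    if store.get(lock_key) == my_uuid:   # delete only if we still own the lock
        del store[lock_key]
    local_lock.release()                 # wakes this server's waiting threads
```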
In some embodiments, if a competing thread snoops that the target thread has finished processing the resource data and has released the distributed lock 125, and its snoop time does not exceed the predetermined wait time, it may determine that the resource data processing is complete. Each such competing thread may then exit snooping and release the local memory lock it acquired (while being added to the local waiting queue of its corresponding server), so that the waiting threads in the local waiting queue of the server corresponding to each competing thread can be awakened.
In some embodiments, when a competing thread finds that the Value corresponding to the snoop Key is not null, it may determine that the target thread has completed the resource data processing and has released the distributed lock 125.
According to the resource data processing method 300 based on local memory locks and a distributed lock provided by the present invention, the server cluster includes a plurality of servers and a distributed lock, and each server has its own local memory lock. When a plurality of resource access requests are received, they are distributed among the servers. For each server, in response to requests of its multiple threads to acquire its local memory lock, the one thread that successfully acquires the local memory lock is determined as that server's competing thread. In response to requests of the competing threads of the plurality of servers, the one competing thread that successfully acquires the distributed lock is determined as the target thread, and this unique target thread then processes the resource data based on the distributed lock. Therefore, according to the technical scheme of the present invention, only one competing thread among the multiple threads of any server may request to contend for the distributed lock, and only one competing thread across all servers can successfully acquire it to process the resource data. On this basis, it can be ensured that, among all the threads of the plurality of servers in the server cluster, only one thread holds the distributed lock at any time to process resource data, while the threads that did not acquire the distributed lock are prevented from continuously retrying to acquire it. This reduces consumption of CPU resources, improves service performance, and realizes a high-performance exclusive distributed lock.
In addition, embodiments of the present invention further include the following.

A8. The method of any of A1-A7, wherein responding to a request by a plurality of threads to acquire the local memory lock of the server comprises: querying corresponding resource data from the cache according to the resource access request; if the resource data is found in the cache, returning the resource data directly; and if the resource data is not found in the cache, responding to the request of the plurality of threads to acquire the local memory lock of the server.

A9. The method of any of A1-A7, wherein the server cluster comprises k servers, where k > 1 and k is an integer; and allocating a plurality of resource access requests to each server comprises: in response to receiving kn resource access requests, allocating n resource access requests to each server, so that each server processes its n resource access requests using n threads.
The various techniques described herein may be implemented in connection with hardware or software or, alternatively, with a combination of both. Thus, the methods and apparatus of the present invention, or certain aspects or portions of the methods and apparatus of the present invention, may take the form of program code (i.e., instructions) embodied in tangible media, such as removable hard drives, U-drives, floppy diskettes, CD-ROMs, or any other machine-readable storage medium, wherein, when the program is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention.
In the case of program code execution on programmable computers, the computing device will generally include a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. The memory is configured to store the program code; the processor is configured to execute the local memory lock and distributed lock based resource data processing method of the present invention according to the instructions in the program code stored in the memory.
By way of example, and not limitation, readable media comprise readable storage media and communication media. The readable storage medium stores information such as computer readable instructions, data structures, program modules, or other data. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. Combinations of any of the above are also included within the scope of readable media.
In the description provided herein, algorithms and displays are not inherently related to any particular computer, virtual system, or other apparatus. Various general-purpose systems may also be used with examples of the invention. The required structure for a construction of such a system is apparent from the description above. In addition, the present invention is not directed to any particular programming language. It will be appreciated that the teachings of the present invention described herein may be implemented in a variety of programming languages, and the above description of specific languages is provided for disclosure of enablement and best mode of the present invention.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects.
Those skilled in the art will appreciate that the modules or units or components of the devices in the examples disclosed herein may be arranged in a device as described in this embodiment, or alternatively may be located in one or more devices different from the devices in this example. The modules in the foregoing examples may be combined into one module or may be further divided into a plurality of sub-modules.
Unless otherwise specified the use of the ordinal adjectives "first", "second", "third", etc., to describe a common object, merely indicate that different instances of like objects are being referred to, and are not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.

Claims (10)

1. A method for processing resource data based on local memory locks and distributed locks, the method being executed in a server cluster, the server cluster including a plurality of servers and a distributed lock, each server having a local memory lock, the method comprising:
allocating a plurality of resource access requests to each server respectively, so that each server processes the plurality of resource access requests by using a plurality of threads respectively;
For each server, responding to a request of a plurality of threads for acquiring local memory locks of the server, and determining one thread successfully acquiring the local memory locks in the plurality of threads as a competing thread of the server;
Responding to requests of competing threads of a plurality of servers for acquiring the distributed locks, and determining one competing thread which successfully acquires the distributed locks from the competing threads as a target thread;
and responding to a processing request of the target thread for the resource data corresponding to the resource access request based on the distributed lock, and processing the resource data in the server cluster.
2. The method of claim 1, further comprising:
for each server, taking each thread which does not acquire the local memory lock as a waiting thread of the server;
each waiting thread is added to a local waiting queue of the server so as to wait for the local memory lock of the server to be reacquired.
3. The method of claim 1 or 2, wherein determining, as the target thread, one of the competing threads that successfully acquired the distributed lock in response to requests of competing threads of the plurality of servers to acquire the distributed lock, comprises:
responding to requests, sent by the competing threads of the plurality of servers, for calling a RedisLockUtil.lock() method to acquire the distributed lock based on the Key of the distributed lock and the UUIDs of the competing threads;
if any competing thread successfully sets the Value corresponding to the Key of the distributed lock as the UUID of the competing thread, determining that the competing thread is the target thread which successfully acquires the distributed lock.
4. A method according to any one of claims 1-3, further comprising:
responding to a monitoring request of each competing thread of the distributed lock for the resource data processing state of the target thread, and judging whether the monitoring time of each competing thread exceeds a preset waiting time or not;
And if the monitoring time of each competing thread exceeds the preset waiting time and the target thread is not monitored to release the distributed lock, determining one competing thread successfully acquiring the distributed lock in each competing thread as the target thread in response to the request of each competing thread which does not acquire the distributed lock to compete for acquiring the distributed lock.
5. The method of claim 4, wherein each competing thread that does not acquire the distributed lock is adapted to create a snoop Key for the target thread based on the UUID of the target thread, the target thread being adapted to set a corresponding Value for the snoop Key after releasing the distributed lock;
and in response to a monitoring request of each competing thread of the distributed lock for the resource data processing state of the target thread not being acquired, judging whether the monitoring time of each competing thread exceeds a preset waiting time or not, wherein the method comprises the following steps:
Responding to the inquiry request of each competing thread for the Value corresponding to the monitoring Key, and judging whether the monitoring time of each competing thread exceeds the preset waiting time;
And responding to the inquiry request of the Value corresponding to the monitoring Key again after inquiring that the Value corresponding to the monitoring Key is empty by each competing thread, and judging whether the monitoring time of each competing thread exceeds the preset waiting time.
6. The method of claim 5, wherein,
And the target thread is further suitable for deleting the Value corresponding to the Key of the distributed lock after releasing the distributed lock, and releasing the local memory lock acquired by the target thread so as to wake up each waiting thread in the local waiting queue of the server corresponding to the target thread.
7. The method of any of claims 4-6, further comprising:
And if each competing thread monitors that the target thread has finished the resource data processing and released the distributed lock, and the monitoring time does not exceed the preset waiting time, each competing thread exits monitoring and releases the acquired local memory lock, so as to wake up each waiting thread in the local waiting queue of the server corresponding to that competing thread.
8. A resource data processing system, comprising:
A server cluster comprising a plurality of servers and a distributed lock, each server having a local memory lock, and adapted to perform the method of any of claims 1-7 to process resource data;
and a plurality of clients, each client being adapted to send resource access requests to the server cluster.
9. A computing device, comprising:
At least one processor; and
A memory storing program instructions, wherein the program instructions are configured to be executed by the at least one processor, the program instructions comprising instructions for performing the method of any of claims 1-7.
10. A readable storage medium storing program instructions which, when read and executed by a computing device, cause the computing device to perform the method of any of claims 1-7.
CN202311815050.0A 2023-12-26 2023-12-26 Resource data processing method and system based on local memory lock and distributed lock Pending CN117931427A (en)

Publication number: CN117931427A (published 2024-04-26).

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination