CN106790694B - Distributed system and scheduling method of target object in distributed system - Google Patents

Info

Publication number
CN106790694B
CN106790694B (application CN201710092178.7A)
Authority
CN
China
Prior art keywords
lock
thread
target object
request
specific thread
Prior art date
Legal status
Active
Application number
CN201710092178.7A
Other languages
Chinese (zh)
Other versions
CN106790694A (en)
Inventor
吴彰合
Current Assignee
Alibaba China Co Ltd
Original Assignee
Guangzhou UCWeb Computer Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou UCWeb Computer Technology Co Ltd
Priority to CN201710092178.7A
Publication of CN106790694A
Application granted
Publication of CN106790694B

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L67/34 - Network arrangements or protocols involving the movement of software or configuration parameters
    • H04L67/01 - Protocols
    • H04L67/10 - Protocols in which an application is distributed across nodes in the network
    • H04L67/50 - Network services
    • H04L67/60 - Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a distributed system and a method for scheduling a target object in the distributed system. The scheduling method comprises the following steps: creating a lock resource object corresponding to the target object, the lock resource object including lock information associated with the target object; and in response to receiving a lock request sent by a specific thread, determining, according to the lock information in the lock resource object of the target object targeted by the lock request, whether the specific thread can currently acquire the lock corresponding to the target object. In this way, lock requests from different threads in the distributed environment can be mapped to lock requests managed as if within a single JVM, so that multiple lock requests in the distributed environment can be managed and scheduled in a unified manner.

Description

Distributed system and scheduling method of target object in distributed system
Technical Field
The present invention relates to the field of distributed technologies, and in particular, to a distributed system and a method for scheduling a target object in the distributed system.
Background
A distributed lock is one way to achieve synchronized access to shared resources in a distributed environment. If a resource or group of resources is shared among different distributed systems, or among different hosts of the same distributed system, measures are needed to prevent accesses from interfering with each other and to guarantee consistency; a distributed lock is used in such cases.
Most existing distributed lock implementations simply lock a service token (usually a string representing a service process). While the token is locked, other logic in the distributed environment that tries to lock the same service token is blocked and has no chance to acquire the lock until it is released by the thread that locked the token first.
This solution has the following problems: 1) all locks are exclusive, so service throughput is low while locks are in use; 2) lock reentrancy is not supported, which constrains the design of the service code and prevents it from being structured more elegantly and flexibly; 3) single-point failures and timeouts of the machine in the cluster that has locked a given service token cannot be systematically identified and handled, so a single problematic machine can easily affect the others, resulting in poor reliability and robustness; 4) deadlocks cannot be identified effectively and in time, which makes the business code less safe and troubleshooting difficult.
Thus, there is a need for a new scheduling scheme for locks in a distributed environment to address at least one of the problems described above.
Disclosure of Invention
The present invention is directed to a distributed system and a method for scheduling a target object in the distributed system, so as to solve at least one of the above problems.
According to an aspect of the present invention, a method for scheduling a target object in a distributed system is provided, including: creating a lock resource object corresponding to the target object, the lock resource object including lock information associated with the target object; and in response to receiving a lock request sent by a specific thread, determining, according to the lock information in the lock resource object of the target object targeted by the lock request, whether the specific thread can currently acquire the lock corresponding to the target object.
In this way, lock requests from different threads in the distributed environment can be mapped to lock requests managed as if within a single JVM, so that multiple lock requests in the distributed environment can be managed and scheduled in a unified manner.
Preferably, the lock information may include: the object ID of the target object, a first thread holding the target object, and the type and time of the lock held by the first thread for the target object; and/or the lock request may include: the object ID of the target object targeted by the lock request, the type of lock requested, and the request time.
Preferably, the lock information may further include: a second thread waiting for the target object, and the type and time of the lock requested by the second thread for the target object.
Preferably, the step of determining whether the specific thread can currently acquire the lock corresponding to the target object may include: locking the lock resource object corresponding to the lock request; and entering a critical section to access the lock resource object, and determining whether the specific thread can currently acquire the lock according to the lock information and a predetermined lock contention logic.
Thus, for a lock request from a specific thread, the lock resource object corresponding to the request can be locked, and within the critical section of that lock resource object it can be determined whether the thread is currently allowed to acquire the lock on the target object targeted by the request.
Preferably, the predetermined lock contention logic may comprise: allowing a plurality of different threads to hold read locks on the same target object at the same time; and, while one thread holds a write lock on a target object, allowing no other thread to hold a read or write lock on that target object. The predetermined lock contention logic may thus emulate the thread lock contention logic of a JVM.
Preferably, the scheduling method may further include: under the condition that the specific thread cannot acquire the lock currently, exiting the critical zone, and recording the lock request in the lock resource object; and/or recording the type and time of the lock acquired by the specific thread in the lock resource object and exiting the critical section under the condition that the specific thread is judged to be capable of acquiring the lock currently.
Thus, in order to increase the processing efficiency of lock requests, only necessary processing operations may be performed after entering the critical section of the lock resource object, and unnecessary operations may be performed after exiting the critical section.
Preferably, exiting the critical section and recording the lock request in the lock resource object upon determining that the particular thread cannot acquire the lock may include: waiting for release of the lock after exiting the critical section; when the lock is not released after waiting for the preset time, establishing a new thread to continue waiting for the release of the lock, and caching a waiting message of the lock request into the lock resource object.
Therefore, the released information of the lock can be timely identified based on the established new thread, so that the corresponding thread can timely obtain the lock.
Preferably, the scheduling method may further include: when the specific thread is judged not to be capable of acquiring the lock of the target object currently, a lock waiting message is sent to the specific thread, so that the specific thread starts a spin waiting mode to wait for the lock; and/or upon determining that the particular thread is currently able to acquire the lock of the target object, sending the lock to the particular thread for the particular thread to perform processing of subsequent business logic.
Preferably, the scheduling method may further include: clearing invalid lock information from the lock resource object; and/or notifying the client of lock information that has been forcibly interrupted and/or lock information marked as invalid.
Therefore, the lock data in the lock resource object can be maintained, so that the stored lock data are all valid data, and the influence of various exceptions and faults in a distributed environment on scheduling is avoided.
Preferably, the scheduling method may further include: recording the target object targeted by each received lock request and the specific thread that sent it; and, when the specific thread corresponding to a received lock request is the same as a previously recorded specific thread and the targeted object is the same as the target object of that previously recorded lock request, allowing the specific thread to acquire the lock of the requested target object.
Thus, after a thread has acquired the lock on a target object, it can be allowed to obtain the requested lock directly when it issues another lock request for the same target object; that is, the invention also supports reentrant lock requests.
Preferably, the particular thread that sent the lock request may be logged according to the machine IP and/or application port and/or the particular thread ID at the time the particular thread sent the lock request.
Preferably, the scheduling method may further include: judging whether the lock request can cause deadlock based on the lock information in the lock resource object in the distributed system; upon determining that the lock request may cause a deadlock, it is determined that the particular thread is not currently able to acquire the lock.
Preferably, the step of determining whether the lock request will cause a deadlock may comprise: constructing a mesh graph with the threads and the target objects as vertices, wherein a thread is connected to a target object it holds by a first connection line and to a target object it waits for by a second connection line; and determining that the lock request will cause a deadlock when the specific thread, other threads, and target objects form a closed loop in which first connection lines and second connection lines alternate.
According to another aspect of the present invention, a distributed system is also provided, including a scheduling node and at least one service node for running threads. A thread running on a service node can send a lock request to the scheduling node through the service node. In response to receiving a lock request sent by a specific thread, the scheduling node can determine, according to the lock resource object of the target object targeted by the lock request, whether the specific thread can currently acquire the lock corresponding to the target object; and if the lock resource object of the target object targeted by the lock request has not been established, the scheduling node creates a lock resource object corresponding to the target object, the lock resource object including lock information related to the target object.
Preferably, the lock information may include: the object ID of the target object, the first thread holding the target object, the type and time of the lock the first thread holds for the target object, and/or the lock request may include: object ID, type of lock requested, time of request.
Preferably, the lock information may further include: a second thread waiting for the target object, and the type and time of the lock requested by the second thread for the target object.
Preferably, the scheduling node locks the lock resource object corresponding to the lock request, enters the critical section to access the lock resource object, and determines whether the specific thread can currently acquire the lock according to the lock information and a predetermined lock contention logic.
Preferably, the predetermined lock contention logic may comprise: allowing a plurality of different threads to hold read locks on the same target object at the same time; and, while one thread holds a write lock on a target object, allowing no other thread to hold a read or write lock on that target object.
Preferably, in the event that it is determined that the particular thread is currently unable to acquire the lock, the scheduling node exits the critical section and records the lock request in the lock resource object, and/or in the event that it is determined that the particular thread is currently able to acquire the lock, the scheduling node records the type and time of the lock acquired by the particular thread in the lock resource object and exits the critical section.
Preferably, the scheduling node waits for the release of the lock after exiting the critical section, and when the lock is not released after waiting for more than a predetermined time, the scheduling node establishes a new thread to continue waiting for the release of the lock and caches a waiting message of the lock request in the lock resource object.
Preferably, when it is determined that the specific thread cannot currently acquire the lock of the target object, the scheduling node sends a lock waiting message to the specific thread, so that the specific thread starts a spin waiting mode to wait for the lock; and/or when the specific thread is judged to be capable of acquiring the lock of the target object currently, the scheduling node sends the lock to the specific thread so that the specific thread can execute the processing of the subsequent business logic.
Preferably, the scheduling node may further determine whether the lock request may cause a deadlock based on lock information in all lock resource objects in the distributed environment, and when it is determined that the lock request may cause a deadlock, the scheduling node determines that the specific thread cannot currently acquire the lock.
Preferably, the scheduling node may construct a mesh graph with the threads and the target objects as vertices, wherein a thread is connected to a target object it holds by a first connection line and to a target object it waits for by a second connection line; when a thread forms a closed loop with other threads and target objects, and the first connection lines and second connection lines in the closed loop alternate, the scheduling node determines that the lock request will cause a deadlock.
Preferably, the service node sending the lock request may act as a scheduling node, and the created lock resource object is stored in the Redis database.
According to the distributed system and the method for scheduling a target object in the distributed system of the present invention, a lock resource object is established for each target object in the distributed environment, and for a received lock request sent by a specific thread, whether the specific thread is allowed to acquire the lock corresponding to the target object can be determined according to the lock information in the lock resource object of the target object targeted by the request. The invention can therefore map lock requests from different threads in the distributed environment to lock requests managed as if within a single JVM, thereby managing and scheduling multiple lock requests in the distributed environment in a unified manner.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing in greater detail exemplary embodiments thereof with reference to the attached drawings, in which like reference numerals generally represent like parts throughout.
Fig. 1 shows a schematic flow chart of a scheduling method of a target object in a distributed system according to an embodiment of the present invention.
FIG. 2 is a flowchart illustrating a process for determining whether a particular thread can currently acquire a lock corresponding to a target object based on a lock resource object.
FIG. 3 illustrates a resource dependency path diagram that can cause deadlocks.
FIG. 4 shows a functional block diagram of a distributed system according to an embodiment of the present invention.
Fig. 5 shows a process flow of a scheduling node and a service node in a distributed system.
Detailed Description
Preferred embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While the preferred embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
To address the shortcomings of existing distributed lock implementations, the present invention provides a new scheme for scheduling locks in a distributed environment. Fig. 1 is a schematic flow chart illustrating a method for scheduling a target object in a distributed system according to an embodiment of the present invention. The scheduling method of the present invention can be executed by a server, and the target object mentioned in the method can be a resource, an operation, or the like that needs coordination and/or synchronization in the distributed environment.
Referring to FIG. 1, the method begins at step S110 by creating a lock resource object corresponding to a target object, the lock resource object including lock information associated with the target object.
The step of creating a lock resource object may be performed in response to receiving a lock request sent by a specific thread, i.e., a lock resource object may be created for the target object to which the lock request is directed. Each target object has exactly one corresponding lock resource object, so if the lock resource object of the target object targeted by a lock request has already been created, it can be reused directly without being rebuilt.
The lock resource object records lock information related to the target object, which may include, for example, the object ID of the target object; the threads holding the target object, together with the type (e.g., read lock or write lock) and time of the lock each of them holds on the target object; and the threads waiting for the target object, together with the type (e.g., read lock or write lock) and time of the lock each of them has requested for the target object. The object ID may be a preset, globally unique ID used to identify the corresponding target object.
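By way of illustration only, the lock information described above might be held in a data structure along the following lines. This is a minimal Java sketch; the class names (LockResourceObject, LockEntry), the field layout, and the composite thread identifier format are assumptions made for the example and are not taken from the patent.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative container for the lock information of one target object:
// its globally unique ID, the threads holding its lock, and the threads waiting for it.
public class LockResourceObject {

    public enum LockType { READ, WRITE }

    // One entry describing a thread that holds, or waits for, the lock on this object.
    public static class LockEntry {
        final String threadId;   // assumed format: "machineIp:appPort:threadId"
        final LockType type;     // read or write lock
        final long timestamp;    // time the lock was granted or requested

        LockEntry(String threadId, LockType type, long timestamp) {
            this.threadId = threadId;
            this.type = type;
            this.timestamp = timestamp;
        }
    }

    private final String objectId;                              // globally unique target-object ID
    private final List<LockEntry> holders = new ArrayList<>();  // threads currently holding the lock
    private final List<LockEntry> waiters = new ArrayList<>();  // threads currently waiting for the lock

    public LockResourceObject(String objectId) {
        this.objectId = objectId;
    }

    public String getObjectId() { return objectId; }
    public List<LockEntry> getHolders() { return holders; }
    public List<LockEntry> getWaiters() { return waiters; }
}
```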
In step S120, in response to receiving a lock request sent by a specific thread, it is determined whether the specific thread can currently acquire the lock corresponding to the target object according to the lock information in the lock resource object of the target object to which the lock request is directed.
When sending the lock request, the specific thread may send along the object ID of the target object it is directed at, so that the target object can be identified. Thus, a lock request sent by a specific thread may include the object ID of the targeted object, the type of lock requested, the request time, and so on. The word "specific" in "specific thread" is merely for convenience of description and should not be construed as limiting the present invention; a specific thread may be executed by an instance in the distributed system, and the instance executing the specific thread may be regarded as a client.
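Correspondingly, a lock request carrying the fields just listed might look like the sketch below. The class name LockRequest and the client-thread identifier are assumptions; the LockType enum is reused from the LockResourceObject sketch above.

```java
// Illustrative lock request: object ID, requested lock type, request time,
// plus an identifier of the client thread that issued it.
public class LockRequest {
    final String objectId;                       // ID of the target object the request is directed at
    final LockResourceObject.LockType lockType;  // requested lock type: read or write
    final long requestTime;                      // time the request was issued
    final String clientThreadId;                 // assumed format: "machineIp:appPort:threadId"

    public LockRequest(String objectId, LockResourceObject.LockType lockType,
                       long requestTime, String clientThreadId) {
        this.objectId = objectId;
        this.lockType = lockType;
        this.requestTime = requestTime;
        this.clientThreadId = clientThreadId;
    }
}
```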
In summary, by establishing a lock resource object for each target object in the distributed environment, the present invention can determine, for a received lock request sent by a specific thread, whether to allow that thread to acquire the lock corresponding to the target object according to the lock information in the lock resource object of the target object targeted by the request. The invention can therefore map lock requests from different threads in the distributed environment to lock requests managed as if within a single JVM, thereby managing and scheduling multiple lock requests in the distributed environment in a unified manner. Since the application scenario of the scheduling method is a distributed system, it is preferable to process only one lock request at a time, so as to avoid deadlocks caused by processing multiple lock requests simultaneously.
The following describes in further detail a process of determining whether the specific thread can currently acquire the lock corresponding to the target object according to the lock resource object in step S120.
Referring to fig. 2, step S210 may be executed to lock a lock resource object corresponding to a lock request. For example, a target object may be locked using the keyword synchronized in the Java language, i.e., a locked resource object may be locked.
Then, step S220 is executed: the critical section is entered to access the lock resource object, and whether the specific thread can currently acquire the lock is determined according to the lock information and the predetermined lock contention logic. Thus, when processing a lock request, the lock resource object of the targeted object is locked first, and then its critical section is entered to determine whether the thread is currently allowed to acquire the lock of the target object. The lock applied to the lock resource object is generally exclusive; that is, while it is held, other lock requests directed at the same lock resource object cannot acquire it and therefore cannot access the lock resource object. This prevents multiple lock requests for the same lock resource object from being processed at the same time and ensures that the lock request processing logic does not go wrong.
The lock contention logic described in this disclosure may be thread lock contention logic that emulates the JVM's. Specifically, the locks requested by lock requests in the present invention can be divided into read locks and write locks: a thread holding a read lock only performs read operations on the corresponding target object, while a thread holding a write lock needs to perform write operations on it. Therefore, the lock contention logic of the present invention may be configured to allow multiple different threads to each hold a read lock on the same target object at the same time, and to allow no other thread to hold a read or write lock on a target object while one thread holds a write lock on it. That is, the same target object can be held by multiple threads requesting read locks at the same time, but by only one thread requesting a write lock at a time.
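A minimal sketch of this contention rule, reusing the LockResourceObject and LockRequest classes from the earlier sketches, is shown below; the synchronized block stands in for locking the lock resource object before its critical section is entered, and reentrancy is assumed to be handled separately. Names and structure are illustrative only.

```java
// Sketch of JVM-style read/write contention: concurrent readers are allowed,
// a writer excludes every other holder.
public class LockContention {

    // Returns true if the requesting thread may acquire the lock right now.
    public static boolean canAcquire(LockResourceObject resource, LockRequest request) {
        synchronized (resource) {   // exclusive lock on the lock resource object
            for (LockResourceObject.LockEntry holder : resource.getHolders()) {
                if (holder.type == LockResourceObject.LockType.WRITE) {
                    return false;   // an existing write lock blocks all other requests
                }
                if (request.lockType == LockResourceObject.LockType.WRITE) {
                    return false;   // a write request is blocked by any existing reader
                }
            }
            return true;            // read request with only readers, or no holders at all
        }
    }
}
```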
Upon determining that the particular thread is currently unable to acquire the lock of the target object, a lock wait message may be sent to the particular thread such that the particular thread initiates a spin wait mode to await release of the lock. Upon determining that the particular thread is currently able to acquire the lock of the target object, the lock may be sent to the particular thread so that the particular thread may hold the target object to perform processing of subsequent business logic.
So far, the process of the scheduling method of the target object in the distributed system of the present invention is basically described with reference to fig. 1 and fig. 2, and details related thereto are further described below.
1. Processing inside the critical section
To increase the processing efficiency of lock requests, it is preferable to process, after entering the critical section of the lock resource object, only the operations that are strictly necessary for the current lock request, so that the time spent inside the critical section stays short; operations that are not strictly necessary can be processed after exiting the critical section. For example, the operations performed after entering the critical section may include:
(1) Lazily cleaning stale lock data associated with the currently requested target object (which may be referred to as triggered lazy cleaning), to reduce interference.
Here, the instances running threads in the distributed system may periodically send information about the validity of the threads running on them, so that invalid lock information in the lock resource object can be cleared based on the received thread validity information.
(2) Calculating, from the lock information cached in the current lock resource object, whether the requested lock can be obtained immediately.
In the event that it is determined that a particular thread is currently able to acquire a lock, the type and time of the lock acquired by the particular thread may be recorded in the lock resource object and then the critical section may be exited.
When it is determined, after entering the critical section, that the particular thread cannot currently acquire the lock, the critical section may be exited first, and the lock request is then recorded in the lock resource object; that is, the thread waiting for the target object and the type and time of the lock it has requested for the target object are recorded. Specifically, the release of the lock may be waited for after exiting the critical section, and when the lock has not been released after more than a predetermined time, a new thread may be created to continue waiting for the release while a wait message for the lock request is cached in the lock resource object (a sketch of this wait-and-hand-off step is given after this list).
(3) Obtaining a snapshot of the lock information across the entire current cluster, and checking via the deadlock algorithm whether the current lock request would cause a deadlock. The determination principle of the deadlock algorithm is described in detail below and is not repeated here.
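The wait-and-hand-off step mentioned under item (2) above could be sketched as follows. The 3-second timeout is taken from the embodiment described later; the CountDownLatch standing in for the release notification, and all class and method names, are assumptions for illustration.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Sketch: wait briefly for the lock outside the critical section; on timeout,
// record the request as a waiter and let a new thread keep watching for the release.
public class LockWaitHandler {

    public boolean waitOrPark(LockResourceObject resource, LockRequest request,
                              CountDownLatch released) throws InterruptedException {
        // Synchronous wait after exiting the critical section of the lock resource object.
        if (released.await(3, TimeUnit.SECONDS)) {
            return true;                                   // lock was released within the timeout
        }
        // Timed out: cache the wait message in the lock resource object ...
        synchronized (resource) {
            resource.getWaiters().add(new LockResourceObject.LockEntry(
                    request.clientThreadId, request.lockType, request.requestTime));
        }
        // ... and let a newly created thread continue waiting so the release is noticed promptly.
        new Thread(() -> {
            try {
                released.await();
                // Here the scheduler would re-run the contention check and call the client back.
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "lock-waiter-" + request.objectId).start();
        return false;                                      // caller returns a lock-wait flag instead
    }
}
```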
2. Supporting reentrant lock requests
A reentrant lock, also called a recursive lock, means that after a thread acquires the lock in an outer function, inner functions called by the same thread that also contain lock-acquisition code are not blocked by it. For example, if a thread executing a locked method calls another method that requires the same lock, the thread may execute the called method directly without acquiring the lock again. The greatest benefit of reentrant locks is that they help avoid deadlocks.
In order to identify a lock request as a reentrant lock request, each received lock request and its corresponding specific thread need to be recorded. In the present invention, the machine IP and/or application port and/or thread ID at the time the specific thread sends the lock request can be used as the identifier of that specific thread, so as to record which thread sent the request. In addition, when recording the specific thread of a received lock request, the lock request itself may be precisely identified by combining the request time with the specific-thread identifier.
When a received lock request is identified as a reentrant lock request, the specific thread corresponding to it may be allowed to acquire the lock of the targeted object. Specifically, when the specific thread corresponding to the received lock request is the same thread as a previously recorded specific thread, and the targeted object is the same as the target object of that thread's previously recorded lock request, the lock request may be regarded as a reentrant lock request, and a request so identified may be directly allowed to acquire the lock of the targeted object.
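As an illustration of the reentrancy check just described, the sketch below identifies a client thread by machine IP, application port, and thread ID, and treats a request as a reentry when that identity already holds a lock on the same target object. It reuses the earlier illustrative classes; the identifier format and method names are assumptions.

```java
// Sketch of reentrant-request recognition based on the client thread identity.
public class ReentrancyCheck {

    // Builds the client thread identity used to mark lock requests.
    public static String clientThreadId(String machineIp, int appPort, long threadId) {
        return machineIp + ":" + appPort + ":" + threadId;
    }

    // True if the same client thread already holds a lock on the requested target object.
    public static boolean isReentrant(LockResourceObject resource, LockRequest request) {
        if (!resource.getObjectId().equals(request.objectId)) {
            return false;
        }
        for (LockResourceObject.LockEntry holder : resource.getHolders()) {
            if (holder.threadId.equals(request.clientThreadId)) {
                return true;   // same thread, same target object: grant the lock directly
            }
        }
        return false;
    }
}
```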
3. Deadlock identification algorithm
Because lock requests from multiple threads on different target objects may cause deadlocks in a distributed environment, after a lock request sent by a specific thread is received it is preferable to first determine whether it would cause a deadlock. If it would, an exception can be thrown, the business logic of the specific thread can be forced to roll back, and the resource locks it already occupies can be released.
Briefly, it may be determined whether a lock request may cause deadlock based on lock information in all lock resource objects in a distributed environment, and when it is determined that the lock request may cause deadlock, it is determined that a particular thread cannot currently acquire a lock.
The following detailed description will discuss the principles of the deadlock checking algorithm of the present invention. FIG. 3 illustrates a graph of dependencies between multiple threads and objects in a distributed environment. T1-T7 in the figure represent threads, R1-R11 represent objects, the solid arrows between threads and objects indicate that a thread holds an object, and the dashed arrows between threads and objects indicate that a thread waits for an object.
As shown in fig. 3, thread T1 waits for object R5 and holds object R2, thread T2 holds object R5 and waits for object R6, thread T4 holds object R6 and waits for object R10, and thread T6 holds object R10 and waits for object R2. Thus, the dependency chain among threads T1 → T2 → T4 → T6 causes a deadlock. The inventor has observed that if solid and dashed lines alternate in the mesh graph and eventually form a closed loop, the dependency relationship on that closed loop is necessarily a deadlock.
Therefore, a snapshot of the objects held and waited for by all threads in the current distributed environment can be obtained, and a mesh graph of the reference relationships between threads and objects can be constructed with the threads and objects as vertices. A thread and a target object it holds can be connected by a first connection line, and a thread and a target object it waits for by a second connection line, the first connection line being different from the second. When a thread forms a closed loop with other threads and target objects, and the first connection lines and second connection lines in the closed loop alternate, it can be determined that the lock request corresponding to that thread causes a deadlock.
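For illustration, the closed-loop test can be implemented by collapsing the mixed thread/object mesh into a thread-level wait-for graph (thread A depends on thread B when A waits for an object that B holds) and searching that graph for a cycle, which corresponds to the alternating hold/wait loop described above. This is an assumed realisation, not the patent's literal algorithm; all names are illustrative.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch: build a thread -> thread wait-for graph from the hold/wait snapshot
// and detect a cycle with depth-first search.
public class DeadlockCheck {

    // holds: objectId -> threadIds currently holding it; waits: threadId -> objectIds it waits for.
    public static boolean hasDeadlock(Map<String, Set<String>> holds,
                                      Map<String, Set<String>> waits) {
        Map<String, Set<String>> waitFor = new HashMap<>();
        for (Map.Entry<String, Set<String>> e : waits.entrySet()) {
            for (String objectId : e.getValue()) {
                for (String holder : holds.getOrDefault(objectId, Collections.emptySet())) {
                    waitFor.computeIfAbsent(e.getKey(), k -> new HashSet<>()).add(holder);
                }
            }
        }
        Set<String> visiting = new HashSet<>();
        Set<String> done = new HashSet<>();
        for (String thread : waitFor.keySet()) {
            if (dfs(thread, waitFor, visiting, done)) {
                return true;   // a cycle in the wait-for graph means a deadlock
            }
        }
        return false;
    }

    private static boolean dfs(String thread, Map<String, Set<String>> waitFor,
                               Set<String> visiting, Set<String> done) {
        if (visiting.contains(thread)) return true;    // back edge: cycle found
        if (done.contains(thread)) return false;
        visiting.add(thread);
        for (String next : waitFor.getOrDefault(thread, Collections.emptySet())) {
            if (dfs(next, waitFor, visiting, done)) return true;
        }
        visiting.remove(thread);
        done.add(thread);
        return false;
    }
}
```

For the example of fig. 3, the wait-for edges T1 → T2 → T4 → T6 → T1 form such a cycle.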
4. Handling of abnormal situations
The scheduling method of the invention fully considers the following abnormal conditions.
A. A client issues a lock request, but the response from the server times out; this may leave an out-of-control lock on the server.
B. A client successfully applies for a lock, but loses contact with the server during subsequent processing of the service code.
C. After a client applies for a lock, the application on the client is restarted for some reason; the locks previously held by that client should then be released so that threads on other clients in the distributed system are not prevented from acquiring them.
D. The server identifies and marks a lock as discarded or forcibly interrupted by some rule, but the client thread that held the now-lost lock is still running as usual.
E. In a code structure with nested locks, if an outer resource lock has a problem, the server should identify this and force the related inner nested lock requests to fail as well, so that nested locks remain logically consistent.
F. Because the invention supports nested-lock code structures, requests from multiple threads for various resource locks can cause deadlocks. Therefore, when the server receives a lock request from a client, it first checks whether the request would cause a deadlock; if so, an exception is thrown, the business logic of the requesting client is forced to roll back, and the resource locks it already occupies are released.
Exception conditions A, B, and C may be handled by establishing a heartbeat mechanism between the client and the server: for example, the client may periodically send the server information about the valid threads running on it, so that the server can identify invalid lock information in the established lock resource objects based on the received thread information.
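A heartbeat of this kind could be sketched as below: the client periodically reports the IDs of its live threads so that the server can expire lock entries whose owners have disappeared. The transport interface, the 10-second interval, and all names are assumptions for illustration, not a concrete API of the patent.

```java
import java.util.Set;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

// Sketch of a client-side heartbeat that reports live thread IDs to the lock server.
public class HeartbeatSender {

    // Hypothetical transport to the lock server; not a real library API.
    public interface LockServerTransport {
        void reportLiveThreads(String clientId, Set<String> liveThreadIds);
    }

    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public void start(String clientId, LockServerTransport transport,
                      Supplier<Set<String>> liveThreadsSupplier) {
        // Periodically send the currently valid thread IDs; the server uses them as a
        // reference to clear invalid lock information from its lock resource objects.
        scheduler.scheduleAtFixedRate(
                () -> transport.reportLiveThreads(clientId, liveThreadsSupplier.get()),
                0, 10, TimeUnit.SECONDS);   // 10-second interval is an assumption
    }

    public void stop() {
        scheduler.shutdownNow();
    }
}
```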
For exception condition D, the server may notify the corresponding client of lock information determined to be invalid or forcibly interrupted, so that the threads on the client can perceive the change in the locks they hold, and the client can respond appropriately by throwing the relevant exception.
For exception condition E, an incoming lock request may be marked so as to determine which thread on which client it belongs to. For example, the machine IP and/or application port and/or thread ID at the time the lock request was sent can be used as the identifier of the specific thread, so as to record which thread sent the request. In addition, when recording the specific thread of a received lock request, the lock request itself may be precisely identified by combining the request time with the specific-thread identifier.
Exception condition F can be handled by the deadlock identification algorithm described above, which is not repeated here.
So far, the scheduling method of the target object in the distributed system of the present invention is described in detail with reference to fig. 1 to 3. In addition, the present invention also provides a distributed system, and fig. 4 is a functional block diagram illustrating a distributed system 400 according to an embodiment of the present invention.
As shown in fig. 4, the distributed system 400 includes a scheduling node 410 and one or more service nodes 420 for running threads, and both the scheduling node 410 and the service nodes 420 may be deployed on machines in a distributed cluster. The scheduling node 410 may be deployed on an independent machine, and the service node 420 may be deployed on other machines in the distributed cluster, where the machine on which the service node 420 is deployed may be regarded as a client, and the machine on which the scheduling node 410 is deployed may be regarded as a server. The server side (i.e., the scheduling node 410) may receive and process the lock requests sent by the various clients (i.e., the service nodes 420).
The scheduling node 410 and the service node 420 of the present invention may be configured so that threads running on the service node 420 are able to send lock requests to the scheduling node 410. In response to receiving the lock request sent by the specific thread, the scheduling node 410 may determine whether the specific thread can currently acquire the lock corresponding to the target object according to the lock resource object of the target object corresponding to the lock request. Where the lock resource object of the target object for which the lock request is directed is not established, the scheduling node 410 may also create a lock resource object corresponding to the target object.
The operations that the scheduling node 410 and the service node 420 can perform in the distributed system 400 of the present invention are described below with reference to fig. 5. The specific operations performed by the scheduling node 410 have been described above with reference to figs. 1 to 3; that is, the scheduling node 410 can execute the target-object scheduling method described above. Therefore, only the process by which the scheduling node 410 and the service node 420 schedule a target object is briefly described here with reference to fig. 5, and for the details involved, reference may be made to the description above.
As shown in fig. 5, the service node 420 may be regarded as a lock client and the scheduling node 410 as a lock server. The lock client provides only two external interfaces: applying for a lock and releasing a lock.
In order to handle various abnormal situations and ensure the consistency and correctness of the lock data, a good deal of internal interaction logic exists between the lock client and the lock server; this interaction is transparent to the lock user, and its key parts are introduced where appropriate below. The processing flows of the lock client and the lock server are described below with reference to fig. 5.
Referring to fig. 5, a user first applies for a read or write lock on a specified target object through the lock-application API provided by the lock client; the lock client then packages the lock request and sends it to the lock server. For example, the lock client packages the object ID of the requested target object, the type of lock requested, and the request time into the lock request before sending it.
After receiving the lock request, the lock server may apply for the lock of a common object (i.e., the global lock indicated in fig. 5) and enter the critical section of that common object, which means that only the current lock request is being processed in the entire distributed environment at that moment. To improve the throughput of the lock server, it is preferable to process only the necessary part of the current lock request's logic inside the critical section of the global lock, so that very little time is spent there, and to process the non-essential part of the logic after exiting the critical section of the global lock.
As shown in FIG. 5, the main work done in the critical section of the global lock includes: creating or reusing the lock resource object associated with the resource ID; lazily cleaning stale lock data associated with the currently requested lock resource to reduce interference (triggered lazy cleaning); calculating, from the lock information cached in the current lock resource object, whether the requested lock can be obtained immediately (and returning immediately if so); and obtaining a snapshot of the lock information across the entire current cluster and checking via the deadlock algorithm whether the current lock request would cause a deadlock.
If the lock server recognizes that the current lock request is a reentrant lock request, or the calculation shows that it can obtain the lock immediately, the lock server can return a lock-application-success message at once and cache the lock information in the internal data structure of the lock resource object. If the calculation shows that the current lock request must be aborted for some associated reason, a forced-abort exception may be thrown.
If the calculation shows that the current lock request cannot acquire the lock immediately, it may first wait synchronously for the lock for up to 3 seconds (the critical section of the global lock mentioned above has already been exited at this point); if the lock has still not been obtained after 3 seconds, a new thread is created to continue waiting while the current main thread returns a lock-wait flag to the client and caches the lock-wait information in the internal data structure of the lock resource object.
If the lock client's request returns a result indicating that the lock has been obtained, the lock client simply continues processing the subsequent service logic. If the lock server throws an exception, the lock client likewise throws a lock-request-failure exception. If a lock-wait flag is returned, the relevant spin-wait logic is carried out (transparently to the lock user), and the spin wait is terminated by a callback once the lock server finds that the current lock client thread has obtained the lock.
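The client-side spin wait referred to above might look like the following sketch: the client thread polls a local flag that the server's callback flips once the lock has been granted. The flag, the back-off interval, and the callback wiring are assumptions for illustration.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of the lock client's spin wait, terminated by the lock server's callback.
public class SpinWait {

    private final AtomicBoolean lockGranted = new AtomicBoolean(false);

    // Invoked by the lock server's callback when this client thread is granted the lock.
    public void onLockGranted() {
        lockGranted.set(true);
    }

    // Spins (with a short sleep) until the callback arrives or the deadline passes.
    public void awaitLock(long timeout, TimeUnit unit)
            throws InterruptedException, TimeoutException {
        long deadline = System.nanoTime() + unit.toNanos(timeout);
        while (!lockGranted.get()) {
            if (System.nanoTime() > deadline) {
                throw new TimeoutException("lock request timed out");
            }
            TimeUnit.MILLISECONDS.sleep(10);   // back off briefly between checks
        }
    }
}
```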
When a lock client thread releases the lock of an object, the related lock-waiting threads on the lock server side are woken up, and after waking they try to acquire the lock of that object. Specifically, a thread enters the critical section of the lock resource object and calculates, from the lock-holding and lock-waiting information of each thread cached in the lock resource object, whether the current lock request can obtain the lock; the calculation emulates the JVM's processing logic for thread lock critical sections and lock waiting, and the critical section of the lock resource object is exited as soon as the result has been computed, so other related lock-waiting threads are not blocked. If a thread in dynamic lock waiting finally obtains the lock of the target object, it calls back the lock client, informs the requesting client thread that it has obtained the lock, and ends the client's spin wait; a lock-waiting thread woken up by the server that still cannot acquire the lock according to the calculation simply continues waiting.
In order to identify and handle lock data generated under abnormal conditions and prevent such garbage data from interfering with normal lock scheduling, both the lock client and the lock server have dedicated threads that periodically check the cached lock information and clear expired data. The lock client also has a thread that synchronizes its local valid-thread information to the lock server; the lock server uses this data as a reference to clear its own invalid lock information and notifies the lock client of any lock information that has been forcibly interrupted on the server side, which in turn influences the lock processing flow on the lock client.
The lock server may also run a callback retry thread to handle cases where its callback to the lock client fails due to a network anomaly.
With the distributed system of the invention, a lock on a target object can be applied for at any point in a project where synchronization is needed, so that other threads in the distributed environment that also apply for a lock on the same target object are synchronized at that point. The invention can to some extent replace a memcache-based global lock, is more powerful, and supports read/write lock separation, lock reentrancy, intelligent deadlock identification, lock nesting, and similar functions; it allows the service code to be structured more flexibly and offers higher throughput than a memcache-based global lock.
In the distributed system of the invention, the lock client and the lock server may communicate over HTTP, but this makes communication time-consuming, and with HTTP the short-connection communication between lock client and lock server requires more complex logic to handle abnormal lock data caused by network anomalies. In addition, the distributed system shown in fig. 4 supports only a single lock server for scheduling resource locks, so its scalability is poor.
After further research, the inventor found that the distributed system of the invention can be rebuilt by introducing a Redis database, in which case no division into scheduling nodes and service nodes is made. Each machine in the distributed system that initiates lock requests can then act as both a scheduling node and a service node; that is, the service node can serve as a scheduling node, and the created lock resource objects can be stored in a distributed Redis database.
Specifically, the communication between the scheduling node and the service node can be adjusted to emulate the JVM's synchronized behaviour by watching changes to designated key values in the Redis database and using a simple global lock built on the Redis mechanism. A lock state change caused by one service in the cluster can be propagated by broadcast (for example, using Netty), and within each single-machine service the wait + notify and loop-polling mechanisms ensure that any application in the cluster sees the change, so other machines can be woken up and respond in time, achieving an effect equivalent to wait + notify within a single JVM.
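A possible shape of the Redis-backed global lock is sketched below. A hypothetical RedisClient interface is used instead of a concrete client library; the key naming scheme and the TTL (which protects against a crashed node holding the lock forever) are assumptions, and the Netty-based broadcast of lock-state changes is left out.

```java
// Sketch of a simple global lock on top of Redis, used to emulate synchronized
// across the cluster: set a key only if absent, give it a TTL, delete it on release.
public class RedisGlobalLock {

    // Hypothetical minimal Redis operations; real clients expose equivalents.
    public interface RedisClient {
        boolean setIfAbsent(String key, String value, long ttlSeconds);
        String get(String key);
        void delete(String key);
    }

    private final RedisClient redis;

    public RedisGlobalLock(RedisClient redis) {
        this.redis = redis;
    }

    // Tries to take the global lock for the given object ID; returns true on success.
    public boolean tryLock(String objectId, String ownerId) {
        return redis.setIfAbsent("lock:" + objectId, ownerId, 30);   // 30-second TTL is an assumption
    }

    // Releases the lock only if this owner still holds it.
    public void unlock(String objectId, String ownerId) {
        String key = "lock:" + objectId;
        if (ownerId.equals(redis.get(key))) {   // check-then-delete; an atomic script would be safer
            redis.delete(key);
        }
    }
}
```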
Therefore, by combining Redis (master-slave backup) with Netty (communication), the scalability problem of the distributed system shown in fig. 4 can be solved, and a simpler yet effective code design can be achieved for handling single-point failures of cluster machines, network exceptions, and synchronization of lock information.
In summary, the present invention can manage and schedule locks in a distributed environment, thereby supporting synchronization and coordination of services having dependency or association relationships in the distributed environment, and the lock functions supported by the present invention are as follows:
A. multiple lock requests under the same thread of the client can be identified as the same client thread.
B. Read locks and write locks can be placed on the same object, with read and write locks separated: read-lock threads on the same resource can run concurrently, while read locks and write locks of different client threads are mutually exclusive.
C. Lock reentry is supported for lock requests identified as coming from the same client thread.
D. When a client's lock request is received, the deadlock identification algorithm can check promptly and effectively whether it would cause a deadlock, and if so an exception is thrown immediately.
E. When a client applies for the lock of a target object and finds that the object is already held by other client threads, the current lock request can block and wait. Once the lock on the awaited target object is released by the other threads, the server promptly recognizes this and, emulating the built-in lock scheduling of a single JVM, selects a waiting client thread, which then obtains the lock in time and stops waiting.
F. The invention takes into account, as far as possible, the influence of various anomalies and faults in the distributed environment on lock coordination and scheduling. For the cases that can be handled effectively, identification and compensation logic keeps the lock-related data consistent and correct; the cases that are hard to handle effectively fail gracefully. This reduces the complexity of using distributed locks as much as possible and gives priority to the correctness of the lock service.
G. The invention provides a simple and clear external interface and encapsulates the complex logic of lock scheduling.
The distributed system and the scheduling system of the target object in the distributed system according to the present invention have been described in detail above with reference to the accompanying drawings.
Furthermore, the method according to the invention may also be implemented as a computer program comprising computer program code instructions for carrying out the above-mentioned steps defined in the above-mentioned method of the invention. Alternatively, the method according to the present invention may also be implemented as a computer program product comprising a computer readable medium having stored thereon a computer program for executing the above-mentioned functions defined in the above-mentioned method of the present invention. Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems and methods according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present invention, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (20)

1. A method for scheduling a target object in a distributed system comprises the following steps:
creating a lock resource object corresponding to the target object, the lock resource object including lock information related to the target object, the lock information including an object ID of the target object, a first thread holding the target object, a type of lock held by the first thread for the target object, and a time;
in response to receiving a lock request sent by a specific thread, locking a lock resource object corresponding to the lock request, entering a critical zone to access the lock resource object, and judging whether the specific thread can acquire a lock currently according to the lock information and a preset lock contention logic, wherein the lock request comprises an object ID of a target object to which the lock request is directed, a type of the requested lock and a request time.
2. The scheduling method of claim 1,
the lock information further includes: a second thread waiting for the target object, and a type and time of a lock requested by the second thread for the target object.
3. The scheduling method of claim 1, wherein the predetermined lock contention logic comprises:
allowing a plurality of different threads to respectively acquire read locks for the same target object at the same time;
in the case where one thread holds a write lock for one target object, no other thread can hold read and write locks for the one target object.
4. The scheduling method of claim 1, further comprising:
under the condition that the specific thread cannot acquire the lock currently, exiting the critical zone, and recording the lock request in the lock resource object; and/or
And under the condition that the specific thread is judged to be capable of acquiring the lock currently, recording the type and time of the lock acquired by the specific thread in the lock resource object, and exiting the critical section.
5. The scheduling method of claim 4, wherein exiting the critical section and recording the lock request in the lock resource object if it is determined that the particular thread cannot acquire the lock comprises:
waiting for release of the lock after exiting the critical section;
and when the lock is not released after waiting for the preset time, establishing a new thread to continuously wait for the release of the lock, and caching a waiting message of the lock request into the lock resource object.
6. The scheduling method of claim 4, further comprising:
when the specific thread is judged not to be capable of acquiring the lock of the target object currently, a lock waiting message is sent to the specific thread, so that the specific thread can start a spin waiting mode to wait for the lock; and/or
And when the specific thread is judged to be capable of acquiring the lock of the target object currently, sending the lock to the specific thread so that the specific thread can execute the processing of subsequent business logic.
7. The scheduling method of claim 1, further comprising:
clearing invalid lock information in the lock resource object; and/or
The client is informed of the lock information of the forced interrupt and/or the lock information marked as invalid.
8. The scheduling method of claim 1, further comprising:
recording a target object aimed at by the received lock request and a specific thread sending the lock request;
and when the specific thread corresponding to the received lock request is the same as the previously recorded specific thread and the aimed target object is the same as the previously recorded target object corresponding to the lock request, allowing the specific thread to acquire the lock of the requested target object.
9. The scheduling method of claim 8, wherein
the specific thread sending the lock request is recorded according to the machine IP and/or the application port and/or the thread ID of the specific thread at the time the lock request is sent.
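A small sketch of the re-entrancy check of claims 8 and 9, in which the recorded thread identity is an assumed concatenation of machine IP, application port and thread ID (the exact encoding is not specified by the patent):

```java
// Hypothetical sketch of claims 8 and 9: identify the requesting thread by
// machine IP, application port and thread ID, and treat a repeat request from
// the same identity for the same target object as re-entrant.
class ReentrancyCheck {
    static String threadKey(String machineIp, int appPort, long threadId) {
        return machineIp + ":" + appPort + "#" + threadId;
    }

    static boolean isReentrant(LockResourceObject resource, LockRequest request) {
        synchronized (resource) {
            return resource.holders.stream()
                    .anyMatch(h -> h.threadKey.equals(request.threadKey));
        }
    }
}
```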
10. The scheduling method of any of claims 1 to 9, further comprising:
determining, based on the lock information of the lock resource objects in the distributed system, whether the lock request will cause a deadlock;
and, upon determining that the lock request will cause a deadlock, determining that the specific thread cannot currently acquire the lock.
11. The scheduling method of claim 10, wherein determining whether the lock request will cause a deadlock comprises:
constructing a graph with the threads and the target objects as vertices, wherein each thread is connected by a first connecting line to the target objects it holds and by a second connecting line to the target objects it is waiting for;
and, in the case where the specific thread, other threads, and target objects form a closed ring in which the first connecting lines and second connecting lines alternate, determining that the lock request will cause a deadlock.
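The graph-based check of claim 11 amounts to cycle detection on a wait-for graph in which "holds" edges (first connecting lines) and "waits-for" edges (second connecting lines) alternate. The following Java sketch is one hedged illustration, with assumed data structures; it also assumes re-entrant requests (claim 8) have already been filtered out.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of claim 11: vertices are threads and target objects,
// heldBy records the first connecting lines (object -> holding threads) and
// waits records the second connecting lines (thread -> awaited objects).
class DeadlockDetector {
    final Map<String, List<String>> waits = new HashMap<>();   // threadKey -> awaited object IDs
    final Map<String, List<String>> heldBy = new HashMap<>();  // objectId  -> holding thread keys

    // Conceptually add the new waits-for edge (requestingThread -> requestedObject),
    // then check whether following alternating holds/waits edges from the requested
    // object leads back to the requesting thread, i.e. closes a ring.
    boolean wouldDeadlock(String requestingThread, String requestedObject) {
        Set<String> visitedThreads = new HashSet<>();
        return reachesThread(requestedObject, requestingThread, visitedThreads);
    }

    private boolean reachesThread(String objectId, String target, Set<String> visitedThreads) {
        for (String holder : heldBy.getOrDefault(objectId, Collections.emptyList())) {
            if (holder.equals(target)) {
                return true;   // closed ring found => the request would deadlock
            }
            if (visitedThreads.add(holder)) {
                for (String next : waits.getOrDefault(holder, Collections.emptyList())) {
                    if (reachesThread(next, target, visitedThreads)) {
                        return true;
                    }
                }
            }
        }
        return false;
    }
}
```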
12. A distributed system, comprising: a scheduling node and at least one service node for running threads, wherein a thread running on a service node can send a lock request to the scheduling node,
wherein, in response to receiving a lock request sent by a specific thread, the scheduling node locks the lock resource object corresponding to the lock request, enters a critical section to access the lock resource object, and determines, according to the lock information and predetermined lock contention logic, whether the specific thread can currently acquire the lock, the lock request including an object ID of the target object to which it is directed, the type of the requested lock, and a request time,
and wherein, in the case where no lock resource object has been established for the target object to which the lock request is directed, the scheduling node creates a lock resource object corresponding to the target object, the lock resource object including lock information related to the target object, the lock information including the object ID of the target object, a first thread holding a lock on the target object, and the type and time of the lock held by the first thread for the target object.
13. The distributed system of claim 12, wherein
the lock information further includes: a second thread waiting for the target object, and the type and time of the lock requested by the second thread for the target object.
14. The distributed system of claim 12, wherein the predetermined lock contention logic comprises:
allowing a plurality of different threads to hold read locks on the same target object at the same time;
in the case where one thread holds a write lock on a target object, not allowing any other thread to hold a read lock or a write lock on that target object.
15. The distributed system of claim 12, wherein
in the case where it is determined that the specific thread cannot currently acquire the lock, the scheduling node exits the critical section and records the lock request in the lock resource object, and/or
in the case where it is determined that the specific thread can currently acquire the lock, the scheduling node records the type and time of the lock acquired by the specific thread in the lock resource object and exits the critical section.
16. The distributed system of claim 15, wherein
the scheduling node waits for release of the lock after exiting the critical section,
and, when the lock has not been released after waiting for a preset time, the scheduling node establishes a new thread to continue waiting for the release of the lock and caches a waiting message for the lock request in the lock resource object.
17. The distributed system of claim 15, wherein
when it is determined that the specific thread cannot currently acquire the lock on the target object, the scheduling node sends a lock waiting message to the specific thread so that the specific thread can enter a spin-waiting mode to wait for the lock; and/or
when it is determined that the specific thread can currently acquire the lock on the target object, the scheduling node sends the lock to the specific thread so that the specific thread can proceed with subsequent business logic.
18. The distributed system of claim 12, wherein
the scheduling node further determines, based on the lock information in all lock resource objects in the distributed environment, whether the lock request will cause a deadlock,
and, upon determining that the lock request will cause a deadlock, the scheduling node determines that the specific thread cannot currently acquire the lock.
19. The distributed system of claim 18, wherein
the scheduling node constructs a graph with the threads and the target objects as vertices, wherein each thread is connected by a first connecting line to the target objects it holds and by a second connecting line to the target objects it is waiting for,
and, in the case where the specific thread, other threads, and target objects form a closed ring in which the first connecting lines and second connecting lines alternate, the scheduling node determines that the lock request will cause a deadlock.
20. The distributed system of any one of claims 12 to 19, wherein
the service node that sends the lock request serves as the scheduling node, and the created lock resource object is stored in a Redis database.
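A minimal sketch of claim 20's storage of the lock resource object in a Redis database, here using the Jedis client; the key layout, field names and expiry value are assumptions for illustration and are not specified by the patent.

```java
import redis.clients.jedis.Jedis;

// Hypothetical sketch of claim 20: persist lock resource state in Redis so that
// whichever service node currently acts as the scheduling node can read it.
class RedisLockStore {
    private final Jedis jedis;

    RedisLockStore(String host, int port) {
        this.jedis = new Jedis(host, port);
    }

    void saveHolder(String objectId, String threadKey, String lockType, long grantTime) {
        String key = "lock:" + objectId;          // assumed key layout: one hash per target object
        jedis.hset(key, "holder", threadKey);
        jedis.hset(key, "type", lockType);
        jedis.hset(key, "time", Long.toString(grantTime));
        jedis.expire(key, 30);                    // assumed expiry to guard against lost releases
    }

    void release(String objectId) {
        jedis.del("lock:" + objectId);
    }
}
```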
CN201710092178.7A 2017-02-21 2017-02-21 Distributed system and scheduling method of target object in distributed system Active CN106790694B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710092178.7A CN106790694B (en) 2017-02-21 2017-02-21 Distributed system and scheduling method of target object in distributed system

Publications (2)

Publication Number Publication Date
CN106790694A CN106790694A (en) 2017-05-31
CN106790694B true CN106790694B (en) 2020-04-14

Family

ID=58958399

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710092178.7A Active CN106790694B (en) 2017-02-21 2017-02-21 Distributed system and scheduling method of target object in distributed system

Country Status (1)

Country Link
CN (1) CN106790694B (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107402822B (en) * 2017-07-06 2018-09-11 腾讯科技(深圳)有限公司 Deadlock treatment method and device
CN109257396B (en) * 2017-07-12 2021-07-09 阿里巴巴集团控股有限公司 Distributed lock scheduling method and device
CN107450991A (en) * 2017-07-24 2017-12-08 无锡江南计算技术研究所 An efficient distributed global lock coordination method
CN107632794A (en) * 2017-10-20 2018-01-26 北京小米移动软件有限公司 Read-Write Locks control method and device
CN108572876B (en) * 2018-03-07 2020-11-20 北京神州绿盟信息安全科技股份有限公司 Method and device for realizing read-write lock
CN108959098B (en) * 2018-07-20 2021-11-05 大连理工大学 System and method for testing deadlock defects of distributed system program
CN109246077B (en) * 2018-08-01 2021-06-29 广州唯品会信息科技有限公司 Distributed concurrent transaction verification method, device and computer storage medium
CN109410040A (en) * 2018-11-07 2019-03-01 杭州创金聚乾网络科技有限公司 Matching method, device and equipment for loan applications and investment applications
CN112988319A (en) * 2019-12-02 2021-06-18 中兴通讯股份有限公司 LVM data processing method and device, computer equipment and computer readable medium
CN112631790A (en) * 2021-01-05 2021-04-09 北京字节跳动网络技术有限公司 Program deadlock detection method and device, computer equipment and storage medium
CN113032131B (en) * 2021-05-26 2021-08-31 天津中新智冠信息技术有限公司 Redis-based distributed timing scheduling system and method
CN113553196A (en) * 2021-06-28 2021-10-26 深圳云之家网络有限公司 Service request processing method and device, computer equipment and storage medium
CN115114305B (en) * 2022-04-08 2023-04-28 腾讯科技(深圳)有限公司 Lock management method, device, equipment and storage medium for distributed database
CN115016849A (en) * 2022-04-19 2022-09-06 展讯通信(上海)有限公司 Electronic system control method, device, equipment and storage medium
CN115934372A (en) * 2023-03-09 2023-04-07 浪潮电子信息产业股份有限公司 Data processing method, system, equipment and computer readable storage medium
CN116663854B (en) * 2023-07-24 2023-10-17 匠人智慧(江苏)科技有限公司 Resource scheduling management method, system and storage medium based on intelligent park

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101470627A (en) * 2007-12-29 2009-07-01 北京天融信网络安全技术有限公司 Method for implementing parallel multi-core configuration lock on MIPS platform
CN102355473A (en) * 2011-06-28 2012-02-15 用友软件股份有限公司 Locking control system in distributed computing environment and method
CN102999378A (en) * 2012-12-03 2013-03-27 中国科学院软件研究所 Read-write lock implementation method
CN103488526A (en) * 2013-09-02 2014-01-01 用友软件股份有限公司 System and method for locking business resource in distributed system
CN105700939A (en) * 2016-04-21 2016-06-22 北京京东尚科信息技术有限公司 Method and system for multi-thread synchronization in distributed system

Also Published As

Publication number Publication date
CN106790694A (en) 2017-05-31

Similar Documents

Publication Publication Date Title
CN106790694B (en) Distributed system and scheduling method of target object in distributed system
CN105700939B Method and system for multi-thread synchronization in a distributed system
US10116766B2 (en) Asynchronous and idempotent distributed lock interfaces
US8458517B1 (en) System and method for checkpointing state in a distributed system
US8654650B1 (en) System and method for determining node staleness in a distributed system
CN107070919B (en) Idempotency for database transactions
US9632828B1 (en) Computing and tracking client staleness using transaction responses
KR100423687B1 (en) Cascading failover of a data management application for shared disk file system in loosely coupled node clusters
CN107590072B (en) Application development and test method and device
US8719432B1 (en) System and method for determining staleness of data received from a distributed lock manager
EP3138013B1 (en) System and method for providing distributed transaction lock in transactional middleware machine environment
US9553951B1 (en) Semaphores in distributed computing environments
US11675622B2 (en) Leader election with lifetime term
CN111258976A (en) Distributed lock implementation method, system, device and storage medium
EP3365774B1 (en) System and method for booting application servers in parallel
CN114064414A (en) High-availability cluster state monitoring method and system
CN110895483A (en) Task recovery method and device
CN110895488A (en) Task scheduling method and device
CN108600284B (en) Ceph-based virtual machine high-availability implementation method and system
CN113220535A (en) Method, device and equipment for processing program exception and storage medium
CN109495528B (en) Distributed lock ownership scheduling method and device
US20050155011A1 (en) Method and system for restricting access in a distributed job environment
US10191959B1 (en) Versioned read-only snapshots of shared state in distributed computing environments
CN116594752A (en) Flow scheduling method, device, equipment, medium and program product
JP2009271858A (en) Computing system and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20200807

Address after: 310052 room 508, floor 5, building 4, No. 699, Wangshang Road, Changhe street, Binjiang District, Hangzhou City, Zhejiang Province

Patentee after: Alibaba (China) Co.,Ltd.

Address before: 510665 Guangdong city of Guangzhou province Whampoa Tianhe District Road No. 163 Xiping Yun Lu Yun Ping radio square B tower 13 floor 02 unit self

Patentee before: Guangzhou Aijiuyou Information Technology Co.,Ltd.