CN116595099A - Asynchronous processing method and device for high concurrency data - Google Patents


Info

Publication number
CN116595099A
CN116595099A
Authority
CN
China
Prior art keywords
request
processing
data
processor
queue
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310574797.5A
Other languages
Chinese (zh)
Inventor
孙艳艳
胡阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Yanzichu Technology Co ltd
Original Assignee
Beijing Yanzichu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Yanzichu Technology Co ltd
Priority to CN202310574797.5A
Publication of CN116595099A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/27 Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/06 Buying, selling or leasing transactions
    • G06Q30/0601 Electronic shopping [e-shopping]
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention relates to the field of computers, and in particular to a method and device for asynchronous processing of high-concurrency data. The method includes receiving a user request, parsing the request data, allocating multithreaded processors, establishing a distributed message-processing mechanism and a concurrency-control mechanism, performing logic processing, and supporting asynchronous-processing status queries. The asynchronous processing method provided by the invention can rapidly handle highly concurrent requests, improving the performance and efficiency of the system. By adopting multithreading and a distributed message-processing mechanism, the system also becomes more stable and reliable, with stronger fault tolerance and scalability. The device provided by the invention effectively implements the method: it has high concurrent-processing capacity and good scalability, effectively reduces data-processing latency, and improves the throughput and response speed of the system.

Description

Asynchronous processing method and device for high concurrency data
Technical Field
The invention relates to the field of computers, and in particular to a method and device for asynchronous processing of high-concurrency data.
Background
In current applications such as point-exchange and point-mall platforms, the growing popularity of flash-sale purchasing activities and the increasing number of participants place higher demands on system performance and reliability under highly concurrent requests. The traditional synchronous processing mode struggles to meet these demands, and overselling frequently occurs, degrading the user experience and causing abnormal system operation.
In conventional synchronous processing, when a request arrives, the system processes it to completion before handling the next request. This approach has the following problems:
1. Performance bottlenecks: synchronous processing cannot fully utilize system resources, so performance degrades easily as concurrent requests increase.
2. Overselling: under high concurrency, multiple requests accessing the same resource simultaneously cause resource contention, which can lead to overselling.
3. Unreliability: when the system encounters an exception or a processing failure, subsequent requests cannot be handled in time, resulting in crashes or inconsistent data.
Disclosure of Invention
To solve these problems, the invention provides a method and device for asynchronous processing of high-concurrency data, which prevent overselling in high-concurrency shopping scenarios and are suitable for applications such as point-exchange and point-mall platforms. By processing data asynchronously, the method effectively solves the data-processing problem under high concurrency and ensures the reliability and performance of the system.
The invention is realized by adopting the following technical scheme:
in a first aspect, the present invention provides a method for asynchronous processing of high concurrency data, applied to a server, the method comprising the following steps:
receiving a flash-sale purchase request from a user side, creating a unique network identifier according to the IP address of the user side, and feeding back a data asynchronous-processing query identifier to the user side;
parsing the purchase request to obtain request data, allocating a multithreaded processor according to the category of the request data, and storing the request data into the corresponding processor request queue;
mapping the request queue of each processor to a processing module through a distributed message-processing mechanism, the processing module reading the request data of each request queue;
controlling the number of simultaneously processed purchase requests with a thread pool, based on a concurrency-control mechanism set by the processing module;
the processing module performing logic processing of the purchase service according to the received request data, updating the order and inventory state according to the processing result, feeding back the processing result to the user side, and allowing the purchase-processing flow to be queried via the data asynchronous-processing query identifier.
As a further scheme of the invention, after a purchase request is received from the user side, a unique network identifier is created by combining the IP address of the user side with the current timestamp: the IP address and the timestamp are spliced into a new character string, and the string is processed with a hash algorithm to generate the unique identifier.
As a further aspect of the present invention, the processing of the character string using a hash algorithm generates a unique identifier, including the steps of:
performing hash processing by taking a character string generated by splicing the IP address and the current timestamp as input;
carrying out hash processing on the string spliced from the IP address and the current timestamp using the SHA-256 algorithm: the digest() method of the hash object is called, the input string is encoded into a byte sequence using UTF-8, the byte sequence is hashed to produce a fixed-length byte array, and the hashed result is returned;
the hashed result is a fixed-length byte array or its hexadecimal string representation.
As a further aspect of the present invention, before the data asynchronous-processing query identifier is fed back to the user side, the method further comprises generating a unique identifier using the UUID (universally unique identifier) algorithm, which comprises the following steps:
acquiring the IP address of the user side via the getRemoteAddr() method of the HttpServletRequest object;
calling UUID.nameUUIDFromBytes() with the obtained IP address as the input parameter to generate a UUID object associated with the IP address;
converting the generated UUID object into string form using toString() to produce the identifier;
the generated identifier is also returned to the user side as part of the response data, and the user side stores the identifier.
As a further aspect of the present invention, before the data asynchronous-processing query identifier is fed back to the user side, the method further comprises generating a unique identifier using the Snowflake algorithm, which comprises the following steps:
obtaining the current timestamp of the purchase request and converting it into a 41-bit binary number;
identifying each node by a workerId, converting the workerId into a 22-bit binary number, and splicing the two binary numbers into a 64-bit binary number whose sign bit is fixed to 0;
converting the generated 64-bit binary number into a decimal number, returning it to the user side as the unique identifier generated by the Snowflake algorithm, and having the user side store the identifier.
As a further scheme of the invention, the unique identifier generated by the Snowflake algorithm is a 64-bit binary number consisting of three parts:
the first part is a 1-bit sign bit, fixed to 0;
the second part is a 41-bit timestamp, accurate to the millisecond, representing the time at which the identifier was generated;
the third part is a 22-bit sequence field, guaranteeing the uniqueness of identifiers generated within the same millisecond.
When the purchase request is parsed to obtain the request data, the request data is obtained in string form from the received purchase request; the string is parsed, the request parameters are extracted, and the parameters are validated. The request parameters include the purchased-product ID and the purchase quantity, and the product ID contains a product-information field used to determine the category to which the purchase request belongs.
As a further aspect of the present invention, before the multithreaded processor is allocated according to the category of the request data, the method further comprises:
creating a plurality of thread processors, each of which maintains a request queue;
placing the request data into the corresponding processor request queue according to its category;
the processor taking request data out of the request queue and performing logic processing of the purchase service;
each thread processor implements its request queue with a blocking queue: in the processor's main loop, requests are taken out of the queue with the take() method and placed into the queue with the put() method. Each processor maintains its own request queue and uses the thread-pool mechanism to hold the request data awaiting processing.
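The queue mechanism described above can be sketched in Python (the class names, the routing of requests by category, and the processor count are illustrative assumptions, not part of the claimed method); queue.Queue plays the role of the blocking queue, with put() and a blocking get() standing in for the put()/take() pair:

```python
import queue

NUM_PROCESSORS = 4  # hypothetical number of thread processors

class ThreadProcessor:
    """Each processor maintains its own blocking request queue."""
    def __init__(self, processor_id):
        self.processor_id = processor_id
        self.requests = queue.Queue()  # blocking queue of pending request data

    def submit(self, request_data):
        # put() inserts the request into this processor's queue
        self.requests.put(request_data)

    def process_next(self):
        # the take() of the text corresponds to a blocking get() here
        request_data = self.requests.get()
        return f"processor-{self.processor_id} handled {request_data}"

processors = [ThreadProcessor(i) for i in range(NUM_PROCESSORS)]

def dispatch(request_data, category):
    # Route the request to a processor queue by the category of the request data
    processors[category % NUM_PROCESSORS].submit(request_data)

dispatch("order-1001", category=2)
print(processors[2].process_next())  # → processor-2 handled order-1001
```

In a production system each processor's main loop would call process_next() repeatedly on its own thread; here a single call illustrates the queue handoff.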
As a further aspect of the present invention, the distributed message-processing mechanism is a distributed message-processing framework that transfers messages from the processors to the processing-module application; mapping the request queue of each processor to the processing module through the distributed message-processing mechanism comprises the following steps:
creating a topic in the distributed message-processing framework for storing all request data, where the request queue of each processor corresponds to one partition of the topic;
creating a consumer in the processing module for reading request data from the topic;
for each processor, creating a producer for sending request data to the corresponding partition of the topic;
associating each producer with the corresponding processor request queue, so that each producer sends request data only to its own partition;
the processing module consuming the request data from the topic to perform logic processing of the purchase service.
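The topic/partition mapping above can be illustrated with a minimal in-memory model (a real deployment would use a distributed framework such as Kafka; the class names here are hypothetical stand-ins, not the framework's API): each producer is bound to exactly one partition, and the consumer in the processing module drains every partition.

```python
from collections import defaultdict

class Topic:
    """In-memory stand-in for a message-framework topic:
    one partition per processor request queue."""
    def __init__(self, num_partitions):
        self.partitions = defaultdict(list)
        self.num_partitions = num_partitions

class Producer:
    """Bound to one partition, mirroring the processor-to-partition mapping."""
    def __init__(self, topic, partition):
        self.topic = topic
        self.partition = partition

    def send(self, request_data):
        # Each producer sends request data only to its own partition
        self.topic.partitions[self.partition].append(request_data)

class Consumer:
    """The processing module reads request data from every partition."""
    def __init__(self, topic):
        self.topic = topic

    def poll(self):
        records = []
        for p in range(self.topic.num_partitions):
            records.extend(self.topic.partitions[p])
            self.topic.partitions[p].clear()
        return records

topic = Topic(num_partitions=3)
producers = [Producer(topic, p) for p in range(3)]
producers[0].send("req-A")
producers[2].send("req-B")
consumer = Consumer(topic)
print(consumer.poll())  # → ['req-A', 'req-B']
```

The design point is the one-to-one producer/partition binding: it preserves per-processor ordering while letting a single consumer in the processing module see all request data.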
As a further scheme of the invention, the concurrency-control mechanism set by the processing module comprises concurrency control within a processor and concurrency control among processors. Within a processor, a thread pool controls the number of purchase requests processed simultaneously: the thread pool is a set of pre-created threads, and when a purchase request arrives, an idle thread is taken from the pool to process it; when no idle thread is available, the new request is placed in a waiting queue until an idle thread is released. Concurrency among processors is controlled with message queues: each processor has its own request queue, an arriving purchase request is placed into the corresponding queue, and the processing module reads the request data from each queue for processing.
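A minimal Python sketch of the in-processor concurrency control: a pre-created thread pool whose internal queue holds requests while no idle thread is available (the pool size and the bookkeeping variables are illustrative assumptions).

```python
from concurrent.futures import ThreadPoolExecutor
import threading

MAX_CONCURRENT = 2  # hypothetical limit on simultaneously processed requests
active = 0          # requests currently being processed
peak = 0            # highest observed concurrency
lock = threading.Lock()

def handle_purchase(request_id):
    global active, peak
    with lock:
        active += 1
        peak = max(peak, active)
    # ... logic processing of the purchase service would happen here ...
    with lock:
        active -= 1
    return request_id

# A pool of pre-created threads; requests beyond MAX_CONCURRENT wait in the
# executor's internal queue until an idle thread is released.
with ThreadPoolExecutor(max_workers=MAX_CONCURRENT) as pool:
    results = list(pool.map(handle_purchase, range(10)))

print(results)                 # all 10 requests processed, in submission order
print(peak <= MAX_CONCURRENT)  # concurrency never exceeded the configured limit
```

The executor's work queue is exactly the "waiting queue" of the text: submission never blocks the caller, only execution is throttled.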
As a further scheme of the invention, the logic processing of the purchase service comprises the following steps:
Step 1, checking user permission: based on the user's login or registration identity, level, and purchase history, check whether the user has permission to purchase. If not, terminate the purchase request and read the next request data in the processor request queue; if the user has purchase permission, proceed to step 2.
Step 2, checking the IP address: read the IP address of the user side based on the network identifier and, according to the log information stored on the server side, determine whether the same user or the same IP address has made repeated purchases. If so, restrict the purchase permission; if not, proceed to step 3.
Step 3, checking product inventory: based on the purchase quantity in the request data, check whether the inventory of the product meets the demand. If the inventory is insufficient, prompt the user or limit the purchase quantity; if the inventory is sufficient, proceed to step 4.
Step 4, generating the purchase order: allocate inventory according to the purchase quantity, generate the order, perform payment processing, and feed back the processing result, including any purchase-failure information, to the user side based on the network identifier.
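The four steps above can be sketched as a single checking pipeline (the in-memory state, field names, and rejection messages are hypothetical placeholders for the server-side user, log, and inventory data described in the text):

```python
# Hypothetical in-memory state standing in for server-side data
inventory = {"item-1": 3}
purchase_log = set()            # (user_id, ip) pairs that already purchased
authorized_users = {"alice"}    # users with purchase permission

def process_purchase(user_id, ip, item_id, quantity):
    # Step 1: check user permission
    if user_id not in authorized_users:
        return "rejected: no purchase permission"
    # Step 2: check for repeated purchases from the same user or IP address
    if (user_id, ip) in purchase_log:
        return "rejected: repeated purchase"
    # Step 3: check product inventory against the requested quantity
    if inventory.get(item_id, 0) < quantity:
        return "rejected: insufficient inventory"
    # Step 4: allocate inventory, generate the order (payment omitted here)
    inventory[item_id] -= quantity
    purchase_log.add((user_id, ip))
    return f"order created: {item_id} x{quantity}"

print(process_purchase("alice", "10.0.0.5", "item-1", 2))  # order created: item-1 x2
print(process_purchase("alice", "10.0.0.5", "item-1", 1))  # rejected: repeated purchase
print(process_purchase("bob", "10.0.0.6", "item-1", 1))    # rejected: no purchase permission
```

Each early return corresponds to terminating the request and moving on to the next item in the processor's request queue.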
In a second aspect, the present invention also provides a device for asynchronous processing of high-concurrency data, the device comprising:
a receiver component, for receiving a purchase request from the user side and creating a unique network identifier according to the IP address of the user side;
a parser component, for parsing the purchase-request data, allocating a multithreaded processor according to the category of the request data, and storing the request data into the corresponding processor request queue;
a mapping-connection component, for mapping the request queue of each processor to the processing module through the distributed message-processing mechanism, the processing module reading the request data of each request queue;
a concurrency-control component, for controlling the number of simultaneously processed purchase requests with a thread pool, based on the concurrency-control mechanism set by the processing module;
a processor component, for performing logic processing of the purchase service according to the received request data, updating the order and inventory state according to the processing result, and feeding back the processing result to the user side;
a query-feedback component, for querying the purchase-processing flow based on the data asynchronous-processing query identifier and feeding the result back to the user side.
Compared with the prior art, the method and device for asynchronous processing of high-concurrency data introduced herein prevent overselling in high-concurrency shopping scenarios, are suitable for applications such as point-exchange and point-mall platforms, and bring the following beneficial effects:
1. Improved system performance: asynchronous processing makes full use of system resources and handles multiple requests simultaneously, increasing the system's concurrent-processing capacity. The concurrency-control mechanism allocates system resources reasonably, avoiding overload while maintaining high response speed and stability.
2. Overselling prevented: with asynchronous processing, request data is stored in a queue and processed one by one, avoiding resource contention and effectively preventing overselling. The state is updated after each request is processed, ensuring data consistency.
3. Improved user experience: because the system can handle more concurrent requests, users complete their purchases without long waits, improving satisfaction. Users obtain the desired point-exchange goods more quickly, increasing user stickiness and activity.
4. Guaranteed system reliability: when the system encounters an exception or a processing failure, the asynchronous mode stores the failed request in the queue and retries it later, ensuring reliability and stability. Even if the system crashes or behaves abnormally, no data is lost, and normal operation can be restored through the retry mechanism.
5. Scalability and flexibility: the asynchronous processing and concurrency-control mechanisms give the system good scalability and flexibility. The concurrency-control parameters and resource configuration can be adjusted to fit application scenarios of different scales and loads.
In summary, applying this method and device in high-concurrency shopping scenarios effectively improves system performance, prevents overselling, improves the user experience, and guarantees system reliability, while offering good scalability and flexibility; this is of significant practical value for applications such as point-exchange and point-mall platforms.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention. In the drawings:
FIG. 1 is a flow chart of a method of asynchronous processing of highly concurrent data according to one embodiment of the present invention.
Fig. 2 is a flowchart of processing a string using a hash algorithm in a high concurrency data asynchronous processing method according to an embodiment of the present invention.
Fig. 3 is a flow chart of generating unique identifiers using UUID algorithm in a high concurrency data asynchronous processing method according to an embodiment of the present invention.
Fig. 4 is a flowchart of generating a unique identifier using the Snowflake algorithm in a high concurrency data asynchronous processing method according to an embodiment of the present application.
FIG. 5 is a flowchart of a mapping connection between a request queue and a processing module in a high concurrency data asynchronous processing method according to an embodiment of the present application.
FIG. 6 is a block diagram illustrating an apparatus for asynchronous processing of high concurrency data according to an embodiment of the present application.
Description of the embodiments
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
It is noted that all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the terminology used in the description of the applications herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application; the terms "comprising" and "having" and any variations thereof in the description of the application and the claims and the description of the drawings above are intended to cover a non-exclusive inclusion.
The following description of at least one exemplary embodiment is merely exemplary in nature and is in no way intended to limit the disclosure, its application, or uses.
The method and device for asynchronous processing of high-concurrency data introduced herein prevent overselling in high-concurrency shopping scenarios and are suitable for applications such as point-exchange and point-mall platforms. Embodiments of the present application are further described below with reference to the accompanying drawings.
Referring now to FIG. 1, FIG. 1 illustrates a flow chart of a method of high concurrency data asynchronous processing in accordance with the present disclosure. For convenience of explanation, only portions relevant to the embodiments of the present application are shown. In an embodiment of the present application, the present embodiment provides a high concurrency data asynchronous processing method, including the following steps:
S10, receiving a flash-sale purchase request from the user side, creating a unique network identifier according to the IP address of the user side, and feeding back a data asynchronous-processing query identifier to the user side;
S20, parsing the purchase request to obtain request data, allocating a multithreaded processor according to the category of the request data, and storing the request data into the corresponding processor request queue;
S30, mapping the request queue of each processor to a processing module through a distributed message-processing mechanism, the processing module reading the request data of each request queue;
S40, controlling the number of simultaneously processed purchase requests with a thread pool, based on the concurrency-control mechanism set by the processing module;
S50, the processing module performing logic processing of the purchase service according to the received request data, updating the order and inventory state according to the processing result, feeding back the processing result to the user side, and allowing the purchase-processing flow to be queried via the data asynchronous-processing query identifier.
The method provided by the invention effectively reduces data-processing latency and improves the throughput and response speed of the system. By combining multithreaded processors, a distributed message-processing mechanism, and a concurrency-control mechanism, it realizes asynchronous processing of high-concurrency data, improves data-processing efficiency and system performance, and effectively copes with large-scale concurrent requests.
In step S10, after the purchase request is received from the user side, a unique network identifier is created by combining the IP address of the user side with the current timestamp: the IP address and the timestamp are spliced into a new character string, and the string is processed with a hash algorithm such as MD5 or SHA-1 to generate the unique identifier. For example, on a Web server, IP addresses may be used to identify and track access requests from different users.
The method for processing the character string by using a hash algorithm to generate a unique identifier, as shown in fig. 2, comprises the following steps:
s101, performing hash processing by taking a character string generated by splicing an IP address and a current timestamp as input;
S102, carrying out hash processing on the string spliced from the IP address and the current timestamp using the SHA-256 algorithm: the digest() method of the hash object is called, the input string is encoded into a byte sequence using UTF-8, the byte sequence is hashed to produce a fixed-length byte array, and the hashed result is returned;
the hashed result is a fixed-length byte array or its hexadecimal string representation.
The following is an example code that hashes the string spliced by the IP address and the current timestamp using the SHA-256 algorithm to generate a unique identifier:
import hashlib

# Input string to be processed (in practice, the IP address spliced with the current timestamp)
input_str = "hello world"

# Create a hash object for the SHA-256 algorithm
hasher = hashlib.sha256()

# Encode the input string into a byte sequence and hash it
hasher.update(input_str.encode('utf-8'))
hash_bytes = hasher.digest()

# Convert the byte array into a hexadecimal string
hash_str = hash_bytes.hex()

print('Input string:', input_str)
print('Result after hashing:', hash_str)
The code hashes the input string "hello world" with the SHA-256 algorithm and converts the resulting hash value into a hexadecimal string for output.
It should be noted that a hash algorithm greatly reduces the probability of collision, especially when the input string is sufficiently long. In addition, the hash maps inputs of any length to a fixed-length output, which is convenient to store and transmit.
Before the query identifier of step S10 is fed back to the user side, the method further comprises generating a unique identifier using either the UUID (universally unique identifier) algorithm or the Snowflake algorithm.
The UUID algorithm generates a 128-bit unique identifier, typically represented as 32 hexadecimal digits, for example: 550e8400-e29b-41d4-a716-446655440000. In Java, the java.util.UUID class may be used to generate UUIDs. Example code is as follows:
import java.util.UUID;
UUID uuid = UUID.randomUUID();
String uuidString = uuid.toString(); // obtain the string representation of the UUID
The Snowflake algorithm uses a 64-bit integer as the unique identifier: bit 1 is the sign bit, indicating positive or negative; bits 2-42 are a timestamp, covering a range of about 69 years; bits 43-52 are a machine identification, able to represent 1024 machines; and bits 53-64 are a serial number, allowing each machine to generate 4096 unique identifiers per millisecond. When generating unique identifiers with the Snowflake algorithm, a different workerId must be used on each machine.
In this embodiment, referring to fig. 3, generating a unique identifier with the UUID algorithm comprises the following steps:
S111, acquiring the IP address of the user side via the getRemoteAddr() method of the HttpServletRequest object;
S112, calling UUID.nameUUIDFromBytes() with the obtained IP address as the input parameter to generate a UUID object associated with the IP address;
S113, converting the generated UUID object into string form using toString() to produce the identifier.
The generated identifier is also returned to the user side as part of the response data, and the user side stores the identifier.
Example codes are as follows:
import java.util.UUID;
import javax.servlet.http.HttpServletRequest;
public class PurchaseController {
    public void handlePurchaseRequest(HttpServletRequest request) {
        // Obtain the IP address of the user side
        String ipAddress = request.getRemoteAddr();
        // Generate a unique identifier using the UUID algorithm
        UUID uuid = UUID.nameUUIDFromBytes(ipAddress.getBytes());
        // Convert the generated identifier into string form
        String identifier = uuid.toString();
        // Return the generated identifier to the user side as part of the response data
        // ...
    }
}
In this embodiment, the identifier generated by the UUID algorithm is a string of 32 hexadecimal digits representing a 128-bit value that embeds a version number and a variant number. Because UUIDs have a high degree of randomness and uniqueness, they are often used to generate unique identifiers.
It should be noted that, since the IP address is the only input to the UUID algorithm here, multiple clients behind the same IP address would receive the same identifier. To avoid this, an additional random factor, such as the current timestamp or a random number, may be mixed in when generating the identifier.
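A sketch of mixing an additional factor into the name-based UUID, as suggested above (the helper name and the timestamp-plus-counter salt are illustrative assumptions; Python's uuid.uuid3 plays the role of Java's UUID.nameUUIDFromBytes, both producing name-based MD5 UUIDs):

```python
import itertools
import time
import uuid

_counter = itertools.count()

def make_identifier(ip_address):
    # Splice the IP address with the current timestamp and a sequence number,
    # so repeated requests from the same IP yield distinct identifiers.
    salt = f"{time.time_ns()}-{next(_counter)}"
    return str(uuid.uuid3(uuid.NAMESPACE_DNS, ip_address + salt))

id1 = make_identifier("192.168.1.10")
id2 = make_identifier("192.168.1.10")
print(id1 != id2)  # → True: same IP, different identifiers
```

The counter guards against two requests landing on the same timestamp tick; without any salt, the same IP would always map to the same UUID.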
In this embodiment, referring to fig. 4, generating a unique identifier with the Snowflake algorithm comprises the following steps:
S121, acquiring the current timestamp of the purchase request and converting it into a 41-bit binary number;
S122, identifying each node by a workerId, converting the workerId into a 22-bit binary number, and splicing the two binary numbers into a 64-bit binary number whose sign bit is fixed to 0;
S123, converting the generated 64-bit binary number into a decimal number, returning it to the user side as the unique identifier generated by the Snowflake algorithm, and having the user side store the identifier.
The unique identifier generated by the Snowflake algorithm consists of a 64-bit binary number with three parts:
the first part is a 1-bit sign bit, fixed to 0;
the second part is a 41-bit timestamp, accurate to the millisecond, representing the time at which the identifier was generated;
the third part is a 22-bit sequence field, guaranteeing the uniqueness of identifiers generated within the same millisecond.
In this embodiment, the Snowflake algorithm uses a worker ID to identify each node (process or server) uniquely, and the worker ID may be specified when the Snowflake algorithm is initialized. The Snowflake algorithm must ensure that identifiers generated by different nodes do not collide, which can be guaranteed by assigning different sequence numbers to different nodes at the same instant.
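A minimal Snowflake generator following the 1 + 41 + 22 bit layout described above can be sketched as follows. The split of the 22 low bits into a 10-bit worker ID and a 12-bit per-millisecond sequence, as well as the custom epoch, are assumptions for illustration; the patent only fixes the 41-bit timestamp and 22-bit tail.

```java
public class SnowflakeId {
    private static final long EPOCH = 1684684800000L; // custom epoch, an assumption
    private final long workerId;  // occupies the high bits of the 22-bit tail (assumed split)
    private long lastTimestamp = -1L;
    private long sequence = 0L;

    public SnowflakeId(long workerId) {
        this.workerId = workerId & 0x3FF; // 10 bits for the worker ID (assumption)
    }

    public synchronized long nextId() {
        long now = System.currentTimeMillis();
        if (now == lastTimestamp) {
            sequence = (sequence + 1) & 0xFFF; // 12-bit sequence within one millisecond
            if (sequence == 0) {               // sequence exhausted: wait for the next millisecond
                while (now <= lastTimestamp) now = System.currentTimeMillis();
            }
        } else {
            sequence = 0;
        }
        lastTimestamp = now;
        // 1 sign bit (0) | 41-bit timestamp | 10-bit worker ID | 12-bit sequence
        return ((now - EPOCH) << 22) | (workerId << 12) | sequence;
    }
}
```

Because the timestamp occupies the high bits, identifiers produced by one node are strictly increasing, and distinct worker IDs keep identifiers from different nodes from colliding.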
In step S20, when the purchase request is parsed to obtain the request data, the string form of the request data is obtained from the received purchase request, the string is parsed, the request parameters are extracted, and the request parameters are validated, wherein the request parameters include the ID of the commodity to be purchased and the purchase quantity, and the commodity ID includes a commodity information field used to determine the category to which the purchase request belongs.
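The parsing and validation of step S20 can be sketched as below. The wire format ("itemId=...&qty=..."), the field names, and the convention that the category is the suffix of the commodity ID are all assumptions for illustration; the patent does not specify the encoding.

```java
import java.util.HashMap;
import java.util.Map;

public class RequestParser {
    // Parse a request body of the assumed form "itemId=SKU123-FLASH&qty=2".
    public static Map<String, String> parse(String body) {
        Map<String, String> params = new HashMap<>();
        for (String pair : body.split("&")) {
            String[] kv = pair.split("=", 2);
            if (kv.length == 2) params.put(kv[0], kv[1]);
        }
        // Validate the parameters: both fields present, quantity a positive integer
        String itemId = params.get("itemId");
        int qty = Integer.parseInt(params.getOrDefault("qty", "0"));
        if (itemId == null || itemId.isEmpty() || qty <= 0) {
            throw new IllegalArgumentException("invalid purchase request");
        }
        return params;
    }

    // The commodity information field (here assumed to be the suffix after '-')
    // determines the category to which the purchase request belongs.
    public static String categoryOf(String itemId) {
        int i = itemId.lastIndexOf('-');
        return i >= 0 ? itemId.substring(i + 1) : "DEFAULT";
    }
}
```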
In step S20, before the multi-thread processor is allocated according to the type of the request data, the method further includes:
creating a plurality of thread processors, each thread processor maintaining a request queue;
According to the category of the request data, the request data is put into a corresponding processor request queue;
the processor takes the request data out of the request queue and performs the flash-sale business logic;
when each thread processor maintains a request queue, the queue is implemented as a blocking queue: the processor's main loop takes requests out of the queue with the take() method, and requests are placed into the processor's queue with the put() method. Each processor maintains its own request queue and uses the thread-pool mechanism to store the request data awaiting processing.
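The per-processor blocking-queue loop described above can be sketched as follows; the class name, queue capacity, and String payload type are assumptions for illustration.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ThreadProcessor implements Runnable {
    private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(1024);

    // Callers enqueue request data with put(); it blocks when the queue is full.
    public void submit(String requestData) throws InterruptedException {
        queue.put(requestData);
    }

    // Main loop: take() blocks until a request is available, then processes it.
    @Override
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                String requestData = queue.take();
                process(requestData);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    protected void process(String requestData) {
        // flash-sale business logic goes here
    }
}
```

The blocking semantics give natural back-pressure: producers stall when a processor's queue is full rather than overwhelming it.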
In this embodiment, the distributed message processing mechanism is a distributed message processing framework that transfers messages from the processors to the processing-module application; referring to fig. 5, mapping the request queue of each processor to the processing module through the distributed message processing mechanism includes the following steps:
S301, create a Topic in the distributed message processing framework, where the request queue of each processor corresponds to one Partition of the topic;
S302, create a Consumer in the processing module for reading request data from the topic;
S303, for each processor, create a Producer for sending request data to the corresponding partition of the topic;
S304, associate each producer with the corresponding processor request queue, where each producer sends request data only to its corresponding partition;
S305, the processing module consumes the request data from the topic and performs the flash-sale business logic.
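In a real deployment the topic and partitions would be provided by a framework such as Kafka; the sketch below models the mapping of steps S301-S305 with in-memory blocking queues so the partition-per-processor idea can be seen without a broker. All names are illustrative.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class MiniTopic {
    // One queue per partition; each partition backs one processor's request queue (S301).
    private final List<BlockingQueue<String>> partitions = new ArrayList<>();

    public MiniTopic(int partitionCount) {
        for (int i = 0; i < partitionCount; i++) {
            partitions.add(new LinkedBlockingQueue<>());
        }
    }

    // Each producer is bound to exactly one partition, mirroring S303-S304.
    public class Producer {
        private final int partition;
        public Producer(int partition) { this.partition = partition; }
        public void send(String requestData) throws InterruptedException {
            partitions.get(partition).put(requestData);
        }
    }

    // The consumer in the processing module reads from every partition (S302, S305).
    public class Consumer {
        public String poll() {
            for (BlockingQueue<String> p : partitions) {
                String msg = p.poll();
                if (msg != null) return msg;
            }
            return null;
        }
    }
}
```

The design choice mirrored here is that partitioning preserves per-processor ordering while letting one consumer drain all processors' traffic.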
In this embodiment, the concurrency control mechanism set by the processing module includes concurrency control inside each processor and concurrency control between processors. Inside a processor, a thread pool controls the number of flash-sale requests processed simultaneously: the thread pool is a fixed number of threads created in advance; when a flash-sale request arrives, an idle thread is taken from the pool to process it, and when the pool has no idle thread, the new request is placed in a waiting queue until an idle thread is released. Between processors, concurrency is controlled with message queues: each processor has a request queue, an arriving flash-sale request is placed in the corresponding request queue, and the processing module reads the request data from each request queue for processing.
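The in-processor control — a fixed pool of pre-created threads plus a waiting queue for overflow — maps directly onto java.util.concurrent's ThreadPoolExecutor. The pool sizes, queue capacity, and rejection policy below are illustrative assumptions.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ConcurrencyControl {
    // A fixed pool of pre-created threads; when all are busy, newly submitted
    // requests wait in the bounded queue until an idle thread is released.
    public static ThreadPoolExecutor newFlashSalePool(int threads, int waitingCapacity) {
        return new ThreadPoolExecutor(
                threads, threads,                        // fixed-size pool
                0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<>(waitingCapacity),  // the waiting queue
                new ThreadPoolExecutor.CallerRunsPolicy());  // back-pressure if even the queue fills
    }
}
```

CallerRunsPolicy is one possible choice when both the pool and the waiting queue are saturated: it slows the submitter down instead of dropping requests.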
In step S50, the business logic of the flash sale includes the following steps:
Step 1, check user permission: based on the user's login or registration identity, level, and purchase history, check whether the user has the flash-sale purchase permission and decide whether the purchase is allowed; if not, terminate the purchase request and read the next request data in the processor request queue; if the permission is present, go to step 2;
Step 2, check the IP address: read the IP address of the user terminal based on the network identifier and, from the log information stored at the server, judge whether the same user or the same IP address has made repeated purchases; if so, restrict the purchase permission; if not, go to step 3;
Step 3, check commodity inventory: based on the purchase quantity in the request data, check whether the inventory quantity of the commodity satisfies the demand; if the inventory is insufficient, prompt the user or limit the purchase quantity; if the inventory is sufficient, go to step 4;
Step 4, generate the purchase order: allocate inventory according to the purchase quantity, generate the order, perform payment processing, and feed the processing result or the purchase-failure information back to the user terminal based on the network identifier.
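The four checks above can be condensed into a small decision function. The result codes and the boolean inputs standing in for the permission and log lookups are assumptions for illustration.

```java
public class FlashSaleLogic {
    // Illustrative result codes; the names are assumptions.
    public enum Result { NO_PERMISSION, DUPLICATE_IP, OUT_OF_STOCK, ORDER_CREATED }

    public static Result handle(boolean hasPermission, boolean ipSeenBefore,
                                int stock, int quantity) {
        if (!hasPermission)   return Result.NO_PERMISSION; // step 1: user permission
        if (ipSeenBefore)     return Result.DUPLICATE_IP;  // step 2: repeated IP/user
        if (stock < quantity) return Result.OUT_OF_STOCK;  // step 3: inventory check
        // step 4: allocate inventory, generate the order, process payment
        return Result.ORDER_CREATED;
    }
}
```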
In summary, the high-concurrency data asynchronous processing method of the invention has the following advantages:
1. Improved concurrent processing capability: using multithreaded processors and a thread pool to control the number of flash-sale requests processed simultaneously greatly improves the concurrent processing capability of the server and satisfies a large number of users accessing the system at the same time.
2. Reduced request response time: through asynchronous processing, and by querying the purchase processing flow via the data asynchronous-processing query identifier, user requests can be answered quickly, reducing response time and improving the user experience.
3. Improved system stability: using the distributed message processing mechanism to map the request queue of each processor to the processing module avoids a single node failure affecting the whole system, improving its stability.
4. Enhanced system scalability: processors can be created and destroyed dynamically according to the system load, making the system adaptive and scalable in the face of a growing number of users and concurrent requests.
5. Improved system security: creating a unique network identifier from the IP address of the user terminal and feeding the data asynchronous-processing query identifier back to the user terminal improves the security of the system and avoids losses caused by malicious attacks and misoperation.
As shown in fig. 6, fig. 6 is a block diagram of a high concurrency data asynchronous processing device according to an embodiment of the present application, where the high concurrency data asynchronous processing device according to an embodiment of the present application includes a receiver component 401, a parser component 402, a mapping connection component 403, a concurrency control component 404, a processor component 405, and a query feedback component 406.
The receiver component 401 is configured to receive a request for purchase from a user terminal, and create a unique network identifier according to an IP address of the user terminal.
The parser component 402 is configured to parse the purchase request to obtain the request data, allocate a multithreaded processor according to the category of the request data, and store the request data in the corresponding processor request queue.
The mapping connection component 403 is configured to map the request queue of each processor to the processing module through the distributed message processing mechanism, enabling the processing module to read the request data of each request queue.
The concurrency control component 404 is configured to control the number of flash-sale requests processed simultaneously using a thread pool, based on the concurrency control mechanism set by the processing module.
The processor component 405 is configured to perform the flash-sale business logic according to the received request data, update the order and inventory status according to the processing result, and feed the processing result back to the user terminal.
The query feedback component 406 is configured to query the purchase processing flow based on the data asynchronous-processing query identifier and feed it back to the user terminal.
The high-concurrency data asynchronous processing device performs the steps of the high-concurrency data asynchronous processing method described above. When the device is applied to the high-concurrency asynchronous data processing of a points-mall platform, suppose the platform lets users redeem commodities with points. Under high concurrency, the object of the invention is to prevent overselling while maintaining a high response speed.
First, the users' redemption requests are processed with the method described above. Specifically, when a user submits a redemption request, a unique identifier is generated from the user's IP address and the request data is stored in the corresponding processor request queue. The processing module then reads the request data from the queue and processes it logically. If the inventory is sufficient, the commodity quantity is reduced, the order status is set to completed, and the processing result is fed back to the user; otherwise the order status is set to cancelled and an out-of-stock message is sent to the user.
To prevent overselling, the inventory quantity must be checked at each commodity redemption. In practice there may be multiple sources of inventory, such as multiple warehouses or multiple suppliers. To keep inventory quantities consistent, a distributed lock can be employed to ensure that only one request can update the inventory quantity at a time. Specifically, when the processing module receives a redemption request, it attempts to acquire the distributed lock. If the lock is acquired successfully, the processing module updates the inventory quantity and releases the lock. If acquiring the lock fails, indicating that another request is updating the inventory quantity, the processing module returns an insufficient-inventory error message.
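A real deployment would use a distributed lock service (e.g. Redis- or ZooKeeper-based); the sketch below models only the acquire/update/release flow described above, with a local ReentrantLock standing in for the distributed lock. Class and method names are assumptions.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantLock;

public class InventoryService {
    // Stand-in for a distributed lock (Redis/ZooKeeper in a real deployment).
    private final ReentrantLock lock = new ReentrantLock();
    private final AtomicInteger stock;

    public InventoryService(int initialStock) {
        this.stock = new AtomicInteger(initialStock);
    }

    // Returns true if the redemption succeeded; false on lock contention or
    // insufficient inventory (both surfaced upstream as a failure message).
    public boolean redeem(int quantity) {
        if (!lock.tryLock()) {
            return false; // another request is updating the inventory
        }
        try {
            if (stock.get() < quantity) return false;
            stock.addAndGet(-quantity);
            return true;
        } finally {
            lock.unlock(); // always release the lock after updating
        }
    }

    public int remaining() { return stock.get(); }
}
```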
In addition, in order to maintain a higher response speed, the invention can use a caching technique to reduce the number of accesses to the database. In particular, the present invention may incorporate a cache in the processing module for storing inventory quantities and order status of the goods. When the user submits a redemption request, the processing module may first check the amount of inventory in the cache. If the number in the cache is sufficient, the processing module will update the cache and the database and feed back the processing results to the user. Otherwise, the processing module will return an error message of insufficient inventory and will not update the cache and database.
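The cache-first check described above can be sketched as follows; the second map standing in for the database and all names are assumptions for illustration.

```java
import java.util.concurrent.ConcurrentHashMap;

public class CachedInventory {
    // Cache in front of the database, keyed by commodity ID.
    private final ConcurrentHashMap<String, Integer> cache = new ConcurrentHashMap<>();
    // The database is modelled as a second map purely for illustration.
    private final ConcurrentHashMap<String, Integer> database = new ConcurrentHashMap<>();

    public void load(String itemId, int stock) {
        database.put(itemId, stock);
        cache.put(itemId, stock);
    }

    // Check the cached quantity first; update cache and database only when the
    // cached quantity is sufficient, otherwise fail without touching either store.
    public boolean redeem(String itemId, int quantity) {
        Integer cached = cache.get(itemId);
        if (cached == null || cached < quantity) {
            return false; // insufficient inventory: no update
        }
        cache.put(itemId, cached - quantity);
        database.put(itemId, cached - quantity);
        return true;
    }

    public int cachedStock(String itemId) { return cache.getOrDefault(itemId, 0); }
}
```

Serving the check from the cache avoids a database round trip on the hot path, at the cost of having to keep cache and database writes paired.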
The high-concurrency data asynchronous processing apparatus of the invention may use a distributed message queue, such as Apache Kafka or RabbitMQ, to implement the message processing mechanism. It also uses a thread pool to control the number of requests processed simultaneously and distributed lock technology to keep inventory quantities consistent. Finally, caching may be used in the processing module to reduce the number of database accesses.
When the device is applied to the high-concurrency asynchronous data processing of a points-redemption platform, the platform provides various commodity redemption services that users pay for with points. In a high-concurrency shopping scenario, the method and device of the invention prevent overselling and improve the availability and stability of the platform.
Assume a points-redemption platform where users redeem points for commodities. Under high concurrency, the object is to prevent overselling while maintaining a high response speed, implemented as follows:
1. Build the distributed message processing mechanism: build a message queue service, send the flash-sale request data to the message queues, map the message queues to the processing module, and have the processing module read the request data of each request queue.
2. Create a unique network identifier from the IP address of the user terminal and feed the data asynchronous-processing query identifier back to the user terminal: use a reverse proxy server to forward the user's request to different processing nodes, generate a unique network identifier from the IP address, and return the identifier to the user terminal.
3. Parse the purchase request to obtain the request data: transmit the request data over HTTP; after the server receives the request, parse the request data to obtain the commodity ID, quantity, and other information.
4. Allocate a multithreaded processor according to the category of the request data and store the request data in the corresponding processor request queue: distribute the request data to different processor request queues according to the commodity ID, with each processor request queue corresponding to one processing thread.
5. Control the number of flash-sale requests processed simultaneously with a thread pool, based on the concurrency control mechanism set by the processing module: dynamically adjust the size of the thread pool according to the processing capability of the processor, avoiding excessive server load caused by too many requests.
6. The processing module performs the flash-sale business logic according to the received request data, updates the order and inventory status according to the processing result, and feeds the processing result back to the user terminal: the processing module reads the request data from the request queue, processes the flash-sale request logically, judges whether the inventory is sufficient, updates the order and inventory status, and feeds the result back to the user terminal.
By adopting the high-concurrency data asynchronous processing method and device provided by the invention, overselling can be effectively prevented, the availability and stability of the platform improved, and the user's shopping experience enhanced.
In summary, in the method and device for asynchronous processing of high concurrency data provided by the invention, multiple requests can be processed simultaneously by using an asynchronous processing mode, and overload of a system is avoided by a concurrency control mechanism. Meanwhile, the asynchronous processing mode can also improve the reliability of the system, and the consistency of data is ensured through a reasonable state updating mechanism.
The method can use message queues, event-driven techniques, and the like to implement asynchronous processing: request data is stored in a queue, and the processing module takes the data out of the queue one item at a time for processing. The concurrency control mechanism can limit the number of requests processed simultaneously through a thread pool or other concurrency control techniques, ensuring the stability and reliability of the system.
By introducing the high-concurrency data asynchronous processing method and device, the overselling problem in applications such as points-redemption platforms and points-mall platforms can be effectively solved, the concurrent processing capability and performance of the system improved, the user experience enhanced, and the stable operation of the system ensured.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, and alternatives falling within the spirit and principles of the invention.

Claims (10)

1. A high-concurrency data asynchronous processing method, characterized by comprising the following steps:
receiving a purchase request from a user terminal, creating a unique network identifier according to the IP address of the user terminal, and feeding a data asynchronous-processing query identifier back to the user terminal;
parsing the purchase request to obtain request data, allocating a multithreaded processor according to the category of the request data, and storing the request data in the corresponding processor request queue;
mapping the request queue of each processor to a processing module through a distributed message processing mechanism, the processing module reading the request data of each request queue;
controlling the number of flash-sale requests processed simultaneously with a thread pool, based on a concurrency control mechanism set by the processing module;
the processing module performing the flash-sale business logic according to the received request data, updating the order and inventory status according to the processing result, feeding the processing result back to the user terminal, and querying the purchase processing flow based on the data asynchronous-processing query identifier.
2. The high-concurrency data asynchronous processing method according to claim 1, wherein after receiving the request from the user terminal, the unique network identifier is created by combining the user-terminal IP address with the current timestamp: the IP address and the current timestamp are concatenated to generate a new string, and the string is processed with a hash algorithm to generate the unique identifier.
3. The high-concurrency data asynchronous processing method according to claim 2, wherein processing the string with a hash algorithm to generate a unique identifier comprises the following steps:
taking the string generated by concatenating the IP address and the current timestamp as the input for hashing;
hashing the concatenated string with the SHA-256 algorithm: calling the digest() method of the hash algorithm, encoding the input string into a byte sequence with UTF-8, hashing the byte sequence as input to generate a fixed-length byte array, and returning the hashed result;
the hashed result is a fixed-length byte array or hexadecimal string.
4. The high-concurrency data asynchronous processing method according to claim 1, wherein feeding the data asynchronous-processing query identifier back to the user terminal further comprises generating a unique identifier with the UUID algorithm, the generating comprising the following steps:
acquiring the IP address of the user terminal with getRemoteAddr() of the HttpServletRequest object;
calling UUID.nameUUIDFromBytes() with the acquired IP address as the input parameter to generate a UUID object associated with the IP address;
converting the generated UUID object into string form with toString() to generate the identifier;
the generated identifier also being returned to the user terminal as part of the response data, and the user terminal storing the identifier.
5. The high-concurrency data asynchronous processing method according to claim 1, wherein feeding the query identifier back to the user terminal further comprises generating a unique identifier with the Snowflake algorithm, the generating comprising the following steps:
acquiring the current timestamp of the flash-sale request and converting it into a 41-bit binary number;
identifying each node with a worker ID, converting the worker ID into a 22-bit binary number, and splicing the two binary numbers into a 64-bit binary number whose sign bit is fixed to 0;
converting the generated 64-bit binary number into a decimal number, returning it to the user terminal as the unique identifier generated by the Snowflake algorithm, and having the user terminal store the identifier.
6. The high-concurrency data asynchronous processing method according to claim 4 or 5, wherein when the purchase request is parsed to obtain the request data, the string form of the request data is obtained from the received purchase request, the string is parsed, the request parameters are extracted, and the request parameters are validated, wherein the request parameters comprise the ID of the commodity to be purchased and the purchase quantity, and the commodity ID comprises a commodity information field for judging the category to which the purchase request belongs.
7. The method of high concurrency data asynchronous processing as recited in claim 6, further comprising, prior to assigning the multithreaded processor based on the class of requested data:
Creating a plurality of thread processors, each thread processor maintaining a request queue;
according to the category of the request data, the request data is put into a corresponding processor request queue;
the processor taking the request data out of the request queue and performing the flash-sale business logic;
wherein, when each thread processor maintains a request queue, the queue is implemented as a blocking queue: the processor's main loop takes requests out of the queue with the take() method, requests are placed into the processor's queue with the put() method, and each processor maintains its own request queue and uses the thread-pool mechanism to store the request data awaiting processing.
8. The high-concurrency data asynchronous processing method according to claim 1, wherein the distributed message processing mechanism is a distributed message processing framework that transfers messages from the processors to the processing-module application; mapping the request queue of each processor to the processing module through the distributed message processing mechanism comprises the following steps:
creating a topic in the distributed message processing framework for storing all the request data, wherein the request queue of each processor corresponds to one partition of the topic;
creating a consumer in the processing module for reading the request data from the topic;
for each processor, creating a producer for sending request data to the corresponding partition of the topic;
associating each producer with the corresponding processor request queue, wherein each producer sends request data only to its corresponding partition;
the processing module consuming the request data from the topic and performing the flash-sale business logic.
9. The high-concurrency data asynchronous processing method according to claim 1, wherein the concurrency control mechanism set by the processing module comprises concurrency control inside each processor and concurrency control between processors, wherein inside a processor a thread pool controls the number of flash-sale requests processed simultaneously: the thread pool is a fixed number of threads created in advance; when a flash-sale request arrives, an idle thread is taken from the pool to process it, and when the pool has no idle thread, the new request is placed in a waiting queue until an idle thread is released; between processors, concurrency is controlled with message queues: each processor has a request queue, an arriving flash-sale request is placed in the corresponding request queue, and the processing module reads the request data from each request queue for processing.
10. A high-concurrency data asynchronous processing device, characterized in that the device adopts the high-concurrency data asynchronous processing method according to any one of claims 1-9 to prevent overselling in a high-concurrency shopping scenario; the high-concurrency data asynchronous processing device comprises:
a receiver component: configured to receive a purchase request from the user terminal and create a unique network identifier according to the IP address of the user terminal;
a parser component: configured to parse the purchase request to obtain the request data, allocate a multithreaded processor according to the category of the request data, and store the request data in the corresponding processor request queue;
a mapping connection component: configured to map the request queue of each processor to the processing module through the distributed message processing mechanism, the processing module reading the request data of each request queue;
a concurrency control component: configured to control the number of flash-sale requests processed simultaneously with a thread pool, based on the concurrency control mechanism set by the processing module;
a processor component: configured to perform the flash-sale business logic according to the received request data, update the order and inventory status according to the processing result, and feed the processing result back to the user terminal;
a query feedback component: configured to query the purchase processing flow based on the data asynchronous-processing query identifier and feed it back to the user terminal.
CN202310574797.5A 2023-05-22 2023-05-22 Asynchronous processing method and device for high concurrency data Pending CN116595099A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310574797.5A CN116595099A (en) 2023-05-22 2023-05-22 Asynchronous processing method and device for high concurrency data


Publications (1)

Publication Number Publication Date
CN116595099A true CN116595099A (en) 2023-08-15

Family

ID=87607721

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310574797.5A Pending CN116595099A (en) 2023-05-22 2023-05-22 Asynchronous processing method and device for high concurrency data

Country Status (1)

Country Link
CN (1) CN116595099A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107872398A (en) * 2017-06-25 2018-04-03 平安科技(深圳)有限公司 High concurrent data processing method, device and computer-readable recording medium
CN108319508A (en) * 2017-01-18 2018-07-24 ***通信集团公司 HTTP synchronization requests switch to the method and server of asynchronous process
CN108418821A (en) * 2018-03-06 2018-08-17 北京焦点新干线信息技术有限公司 Redis and Kafka-based high-concurrency scene processing method and device for online shopping system
CN110532111A (en) * 2019-08-29 2019-12-03 深圳前海环融联易信息科技服务有限公司 High concurrent requests asynchronous processing method, device, computer equipment and storage medium

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
KUOTIAN: "Distributed ID generation (UUID, Snowflake algorithm)", vol. 1, pages 157 - 159, Retrieved from the Internet <URL:https://www.cnblogs.com/kuotian/p/12869914.html> *
FU Yajun: "Principles and Practice of Enterprise Internet Architecture", vol. 1, 31 May 2021, China Machine Press, pages: 157 - 158 *
LI Zheng: "Research and Implementation of a Data Access Gateway for Information Integration Applications", China Master's Theses Full-text Database, Information Science and Technology, 15 October 2011 (2011-10-15), pages 1 - 67 *
LI Jianying: "Principles and Practice of Big Data Processing Frameworks", vol. 1, China Machine Press, pages: 178 - 180 *
SANG Nan et al.: "Principles and Application Development of Embedded Systems, 2nd Edition", vol. 2, 31 January 2008, Higher Education Press, pages: 113 - 180 *
WU Jianjun: "Java Web Core Technologies", vol. 1, 31 May 2015, Beijing University of Posts and Telecommunications Press, pages: 201 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117453422A (en) * 2023-12-22 2024-01-26 南京研利科技有限公司 Data processing method, device, electronic equipment and computer readable storage medium
CN117453422B (en) * 2023-12-22 2024-03-01 南京研利科技有限公司 Data processing method, device, electronic equipment and computer readable storage medium

Similar Documents

Publication Publication Date Title
US11632441B2 (en) Methods, systems, and devices for electronic note identifier allocation and electronic note generation
US11411897B2 (en) Communication method and communication apparatus for message queue telemetry transport
CN108650262B (en) Cloud platform expansion method and system based on micro-service architecture
CN111885050B (en) Data storage method and device based on block chain network, related equipment and medium
CN108063813B (en) Method and system for parallelizing password service network in cluster environment
US7937704B2 (en) Distributed computer
CN111447185A (en) Processing method of push information and related equipment
CN111698315B Block data processing method and device, and computer equipment
CN110336848B (en) Scheduling method, scheduling system and scheduling equipment for access request
CN110580305B (en) Method, apparatus, system and medium for generating identifier
CN112436997B (en) Chat room message distribution method, message distribution system and electronic equipment
CN116595099A (en) Asynchronous processing method and device for high concurrency data
US20230370285A1 (en) Block-chain-based data processing method, computer device, computer-readable storage medium
CN114024972A (en) Long connection communication method, system, device, equipment and storage medium
CN111988418B (en) Data processing method, device, equipment and computer readable storage medium
CN111541662B (en) Communication method based on binary communication protocol, electronic equipment and storage medium
CN116467091A (en) Message processing method, device, equipment and medium based on message middleware
CN112200680B (en) Block link point management method, device, computer and readable storage medium
WO2024103943A1 (en) Service processing method and apparatus, storage medium, and device
CN111835809B (en) Work order message distribution method, work order message distribution device, server and storage medium
CN109144919B (en) Interface switching method and device
CN113596105B (en) Content acquisition method, edge node and computer readable storage medium
CN111240867A (en) Information communication system and method
CN114844910B (en) Data transmission method, system, equipment and medium of distributed storage system
CN117539649B (en) Identification management method, equipment and readable storage medium of distributed cluster

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination