CN112114976B - Service processing method, device, equipment and storage medium

Info

Publication number
CN112114976B
Authority
CN
China
Prior art keywords
processed, event, service, preset, target
Prior art date
Legal status
Active
Application number
CN202010839637.5A
Other languages
Chinese (zh)
Other versions
CN112114976A (en)
Inventor
叶旺旺
李雄峰
Current Assignee
Zhejiang Dasouche Financial Leasing Co., Ltd.
Original Assignee
Zhejiang Dasouche Financial Leasing Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Zhejiang Dasouche Financial Leasing Co., Ltd.
Priority to CN202010839637.5A
Publication of CN112114976A
Application granted
Publication of CN112114976B

Classifications

    • G06F 9/52 Program synchronisation; Mutual exclusion, e.g. by means of semaphores (under G06F 9/46 Multiprogramming arrangements)
    • G06F 9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues (under G06F 9/48 Program initiating; Program switching)
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

Embodiments of the present application provide a service processing method, apparatus, device, and storage medium, relating to the field of computer technology. The method comprises the following steps: a server acquires at least one to-be-processed event of a to-be-processed service; distributes the acquired event among a plurality of preset storage locations to obtain a target storage location for storing the event; and saves the event to the target storage location, so that the service processing thread corresponding to the target storage location processes the event accordingly. With the embodiments of the present application, multiple events are processed in parallel, event latency is reduced, the timeliness and processing efficiency of events are improved, and the processing efficiency of the service is improved in turn.

Description

Service processing method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a service processing method, apparatus, device, and storage medium.
Background
With continuous advances in technology, many business processing modes are becoming increasingly automated and intelligent. Currently, in some automated business processes, multiple events are generated, and the events of multiple businesses are processed one by one by a single processing thread. With this approach, however, when events are generated faster than the processing thread can handle them, events back up, which degrades the timeliness and processing efficiency of event handling.
Disclosure of Invention
One or more embodiments of the present disclosure are directed to a method, an apparatus, a device, and a storage medium for processing a service, so as to solve the problems of low processing efficiency, poor timeliness, and the like in the current service processing process.
To solve the above technical problems, one or more embodiments of the present specification are implemented as follows:
in a first aspect, an embodiment of the present application provides a service processing method, which is applied to a server, and includes:
acquiring at least one event to be processed of a service to be processed;
distributing the event to be processed based on a plurality of preset storage positions to obtain a target storage position for storing the event to be processed;
and storing the event to be processed to the target storage position so that the business processing thread corresponding to the target storage position carries out corresponding processing on the event to be processed.
In a second aspect, an embodiment of the present application provides a service processing method, which is applied to a client, and includes:
acquiring service processing information of a service to be processed;
if the preset service processing conditions are met according to the service processing information, generating at least one event to be processed of the service to be processed;
And sending a service processing request to a server according to the event to be processed, so that the server distributes the event to be processed based on a plurality of preset storage positions to obtain a target storage position for storing the event to be processed, and storing the event to be processed to the target storage position, so that a service processing thread corresponding to the target storage position carries out corresponding processing on the event to be processed.
In a third aspect, an embodiment of the present application provides a service processing apparatus, which is applied to a server, including:
the acquisition module is used for acquiring at least one event to be processed of the service to be processed;
the distribution module is used for carrying out distribution processing on the event to be processed based on a plurality of preset storage positions to obtain a target storage position for storing the event to be processed;
and the storage module is used for storing the event to be processed to the target storage position so as to enable the business processing thread corresponding to the target storage position to process the event to be processed correspondingly.
In a fourth aspect, an embodiment of the present application provides a service processing apparatus, applied to a client, including:
the acquisition module is used for acquiring service processing information of the service to be processed;
The generating module is used for generating at least one event to be processed of the service to be processed if the preset service processing condition is determined to be met according to the service processing information;
the sending module is used for sending a service processing request to a server according to the event to be processed, so that the server distributes the event to be processed based on a plurality of preset storage positions to obtain a target storage position for storing the event to be processed, and storing the event to be processed to the target storage position, so that a service processing thread corresponding to the target storage position carries out corresponding processing on the event to be processed.
In a fifth aspect, an embodiment of the present application provides a service processing device, including: a processor, and a memory arranged to store computer executable instructions; the computer executable instructions, when executed, cause the processor to perform the steps of the method described in the first aspect above, or to perform the steps of the method described in the second aspect above.
In a sixth aspect, embodiments of the present application provide a computer readable storage medium having a computer program stored thereon, the computer program implementing the steps of the method described in the first aspect or implementing the steps of the method described in the second aspect when being executed by a processor.
In the service processing method, apparatus, and device provided by the embodiments of the present application, when the server acquires at least one to-be-processed event of a to-be-processed service, it distributes the event among a plurality of preset storage locations to obtain a target storage location for storing the event, and saves the event to the target storage location so that the service processing thread corresponding to that location processes the event accordingly. By setting a plurality of storage locations and a plurality of corresponding service processing threads, the service processing threads can process the to-be-processed events in their respective storage locations in parallel, which greatly reduces event latency, improves the timeliness and processing efficiency of events, and in turn improves the processing efficiency of the service.
Drawings
To describe the technical solutions in one or more embodiments of the present specification or in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. The drawings in the following description are obviously only some of the embodiments described in this specification, and a person skilled in the art may derive other drawings from them without inventive effort.
Fig. 1 is a schematic view of a scenario of a service processing method according to one or more embodiments of the present disclosure;
fig. 2 is a schematic flow diagram of a first service processing method according to one or more embodiments of the present disclosure;
fig. 3 is a schematic diagram of a second flow of a service processing method according to one or more embodiments of the present disclosure;
fig. 4 is a schematic flow diagram of a third service processing method according to one or more embodiments of the present disclosure;
fig. 5 is a fourth flowchart of a service processing method according to one or more embodiments of the present disclosure;
fig. 6 is a schematic diagram of a fifth flow of a service processing method according to one or more embodiments of the present disclosure;
fig. 7 is a sixth flowchart of a service processing method according to one or more embodiments of the present disclosure;
fig. 8 is a schematic diagram of a seventh flow of a service processing method according to one or more embodiments of the present disclosure;
fig. 9 is a schematic diagram of an eighth flow chart of a service processing method according to one or more embodiments of the present disclosure;
fig. 10 is a schematic diagram of a ninth flow of a service processing method according to one or more embodiments of the present disclosure;
FIG. 11 is a schematic diagram illustrating a first module composition of a service processing device according to one or more embodiments of the present disclosure;
fig. 12 is a schematic diagram of a second module composition of a service processing device according to one or more embodiments of the present disclosure;
fig. 13 is a schematic structural diagram of a service processing device according to one or more embodiments of the present disclosure.
Detailed Description
To enable a person skilled in the art to better understand the technical solutions in one or more embodiments of the present specification, the technical solutions in one or more embodiments of the present specification are described clearly and completely below with reference to the accompanying drawings. The described embodiments are obviously only some, rather than all, of the embodiments of the present specification. All other embodiments obtained by a person skilled in the art based on one or more embodiments of the present specification without inventive effort shall fall within the protection scope of the present disclosure.
Fig. 1 is a schematic application scenario of a service processing method according to one or more embodiments of the present disclosure. As shown in fig. 1, a plurality of storage locations and a service processing thread corresponding to each storage location are preset. To distinguish the storage locations and the service processing threads, the storage locations are denoted as storage location 0, storage location 1, storage location 2, ..., storage location n, and the service processing threads are denoted as service processing thread 0, service processing thread 1, service processing thread 2, ..., service processing thread n, where n is a positive integer. When the server acquires at least one to-be-processed event of a to-be-processed service, it distributes the event among the preset storage locations to obtain a target storage location for storing the event, for example storage location 2, and saves the event to that target storage location so that the corresponding service processing thread processes it accordingly; that is, if the event is saved to storage location 2, the corresponding service processing thread 2 performs the corresponding service processing on the events in storage location 2. The server may be an independent server or a server cluster formed by a plurality of servers. By setting a plurality of storage locations and a plurality of corresponding service processing threads, the service processing threads can process the to-be-processed events in their respective storage locations in parallel, which greatly reduces event latency, improves the timeliness and processing efficiency of events, and in turn improves the processing efficiency of the service.
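As an illustration of this scenario, the following is a minimal sketch of the partitioned dispatch structure described above, with one queue and one worker thread per storage location. The class and method names (EventDispatcher, dispatch) are illustrative assumptions, and routing uses Java's hashCode as a stand-in for the hash-and-modulo distribution detailed later in the description.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative sketch only: n storage locations, each backed by a queue and
// drained by its own service processing thread, so different services can be
// handled in parallel while events of one service stay in order.
public class EventDispatcher {
    private final List<BlockingQueue<String>> storageLocations = new ArrayList<>();

    public EventDispatcher(int n) {
        for (int i = 0; i < n; i++) {
            BlockingQueue<String> location = new LinkedBlockingQueue<>();
            storageLocations.add(location);
            int index = i;
            Thread worker = new Thread(() -> {          // service processing thread i
                try {
                    while (true) {
                        String event = location.take(); // blocks until an event arrives
                        System.out.println("thread " + index + " processing " + event);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }, "service-processing-thread-" + i);
            worker.setDaemon(true);
            worker.start();
        }
    }

    // Route an event to its target storage location by service identifier.
    public void dispatch(String serviceId, String event) {
        int target = Math.floorMod(serviceId.hashCode(), storageLocations.size());
        storageLocations.get(target).offer(event);
    }
}
```

For example, dispatch("call-123", "CHANNEL_SETUP") and dispatch("call-123", "RINGING") always land in the same queue, so the events of one call are processed in order by a single thread while other calls proceed in parallel.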
Based on the application scenario architecture, one or more embodiments of the present disclosure provide a service processing method. Fig. 2 is a flow chart of a service processing method according to one or more embodiments of the present disclosure, where the method in fig. 2 can be executed by the server in fig. 1, and as shown in fig. 2, the method includes the following steps:
step S102, at least one event to be processed of a service to be processed is obtained;
the specific service type of the service to be processed can be set according to the requirement in practical application. As an example, the service to be processed is a smart phone call service, i.e. call processing is automatically performed when a set call time is reached based on a set calling phone number and at least one called phone number; accordingly, the event to be processed includes a channel setup event, a turn-on event, a ringing event, a voice conversion event, etc. As another example, the transaction to be processed is a transaction, i.e., a purchase transaction of a target commodity from a target transaction party (e.g., a shopping platform) based on transaction information submitted by a user; accordingly, the events to be processed include inventory verification events, order generation events, order confirmation events, payment events, and the like. The service types of the service to be processed and the corresponding event to be processed are not listed in the specification.
Step S104, distributing the event to be processed based on a plurality of preset storage positions to obtain a target storage position for storing the event to be processed;
To ensure that different events of the same to-be-processed service are processed in order, in one or more embodiments of the present disclosure the events stored in a storage location have a defined processing sequence; for example, the event stored first is processed first.
And step S106, storing the event to be processed to a target storage position so as to enable the business processing thread corresponding to the target storage position to process the event to be processed correspondingly.
For example, the event to be processed is a channel establishment event, and the service processing thread corresponding to the target storage position performs channel establishment processing; for another example, the event to be processed is a ringing event, and the service processing thread corresponding to the target storage location performs ringing processing.
In one or more embodiments of the present disclosure, when a server obtains at least one event to be processed of a service to be processed, distributing the event to be processed based on a plurality of preset storage positions to obtain a target storage position for storing the event to be processed; and storing the event to be processed to a target storage position so as to enable the business processing thread corresponding to the target storage position to process the event to be processed correspondingly. Therefore, by setting a plurality of storage positions and a plurality of corresponding service processing threads, the plurality of service processing threads can process the events to be processed in the corresponding storage positions in a parallel manner, so that the time delay of the events is greatly reduced, the timeliness and the processing efficiency of the events are improved, and the processing efficiency of the service is further improved.
Considering that the processing of some services is typically triggered by user operations, one or more embodiments of the present disclosure also provide a first application related to service processing, which a user can operate to have a service processed. The first application may be installed on the server, or on a terminal device of the user such as a mobile phone, a tablet computer, a desktop computer, or a portable computer. Specifically, when the first application is installed on the server, step S102 includes, as shown in fig. 3, the following steps S102-2 and S102-4:
step S102-2, obtaining service processing information of a service to be processed;
Specifically, the user edits the service processing information in an editing interface of the first application on the server and clicks the submit control when editing is complete. When the server detects the user's submit operation, it acquires the service processing information. The service processing information differs with the to-be-processed service and can be set as required in practical applications. For example, for the smart phone call service, the service processing information includes the calling number, the called number, the call content, the call time, and the like; for the transaction service, the service processing information includes user information, commodity information, target transaction party information, and the like.
Step S102-4, if the preset service processing conditions are met according to the service processing information, generating service identification information of the service to be processed, and generating at least one event to be processed of the service to be processed according to the service identification information.
The service processing conditions differ with the to-be-processed service and can be set as required in practical applications. For example, for the smart phone call service, the service processing condition is determined to be satisfied if the calling number and the called number included in the service processing information are both valid and the set call time has been reached. For the transaction service, the service processing condition is determined to be satisfied if the user information, the commodity information, and the target transaction party information included in the service processing information are all valid, and so on.
Therefore, when the first application is installed on the server, the generation, the distribution, the processing and the like of the events to be processed are executed on the server, and the parallel processing of different events can be realized based on the multi-core performance of the server, so that the timeliness and the processing efficiency of the events are improved.
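As a sketch of this server-hosted flow, the snippet below checks a hypothetical call-service condition, generates service identification information, and produces the to-be-processed events. The class, method, and event names are illustrative assumptions and do not come from the patent.

```java
import java.util.List;
import java.util.UUID;

// Illustrative sketch: on the server, the first application checks the preset
// service processing condition and, if it is met, generates a service ID and
// the to-be-processed events for that service.
public class PendingEventGenerator {

    public record PendingEvent(String serviceId, String type) {}

    // Assumed condition: both phone numbers are present and the call time has been reached.
    static boolean conditionMet(String caller, String callee, long callTimeMillis) {
        return caller != null && !caller.isBlank()
                && callee != null && !callee.isBlank()
                && System.currentTimeMillis() >= callTimeMillis;
    }

    static List<PendingEvent> generate(String caller, String callee, long callTimeMillis) {
        if (!conditionMet(caller, callee, callTimeMillis)) {
            return List.of();
        }
        String serviceId = UUID.randomUUID().toString();   // service identification information
        return List.of(
                new PendingEvent(serviceId, "CHANNEL_SETUP"),
                new PendingEvent(serviceId, "RINGING"),
                new PendingEvent(serviceId, "CONNECT"));
    }
}
```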
Further, when the first application is installed on the terminal device of the user, i.e., when it runs as a client, the client needs to interact with the server to complete the processing of the to-be-processed service. Specifically, as shown in fig. 4, step S102 includes the following step S102-6:
Step S102-6, receiving a service processing request sent by a client; wherein the service processing request comprises at least one event to be processed of the service to be processed; the service processing request is sent after determining that a preset service processing condition is met based on the acquired service processing information, and generating a to-be-processed event based on the generated service identification information of the to-be-processed service.
Specifically, the user edits the service processing information in the editing interface of the client and clicks the submit control when editing is complete. When the client detects the user's submit operation, it acquires the service processing information. If the acquired service processing information is determined to meet the preset service processing condition, the client generates service identification information of the to-be-processed service, generates at least one to-be-processed event of the service according to the service identification information, and sends a service processing request to the server according to the event.
Thus, when the first application is installed on the user's terminal device, the to-be-processed events are generated on the client while their distribution and processing are completed on the server. This is convenient for the user, and parallel processing of different events can still be achieved based on the multi-core capability of the server, improving the timeliness and processing efficiency of events.
In one or more embodiments of the present disclosure, the to-be-processed event carries service identification information of the to-be-processed service, and the to-be-processed event is distributed based on a plurality of preset storage positions according to the service identification information, so as to obtain a target storage position for storing the to-be-processed event. Specifically, as shown in fig. 5, step S104 includes:
step S104-2, calculating a hash value of service identification information carried by an event to be processed according to a first preset algorithm;
In one or more embodiments of the present disclosure, considering that some service identification information may contain non-numeric characters such as letters, a hash value of the service identification information carried by the to-be-processed event is calculated according to a first preset algorithm to facilitate the subsequent calculation. The first preset algorithm can be chosen as required in practical applications, for example the MD5 algorithm or the SHA-1 algorithm.
Step S104-4, determining a first number of preset storage positions;
specifically, the total number of preset storage locations is counted and used as the first number. It should be noted that the execution sequence of step S104-2 and step S104-4 may be interchanged.
Step S104-6, according to a second preset algorithm, calculating based on the hash value and the first quantity to obtain a calculation result;
step S104-8, determining a storage location number matched with the calculation result, and determining the storage location corresponding to the matched number as the target storage location.
To ensure that a corresponding target storage location can be determined for any service identification information, in one or more embodiments of the present disclosure each storage location is assigned a number in advance, numbered sequentially from 0: storage location 0, storage location 1, storage location 2, ..., storage location n-1, where n is the number of storage locations, i.e., the first number. Accordingly, the second preset algorithm may be a modulo or remainder operation: because the modulus or remainder is always smaller than the first number, it is taken as the matched storage location number, and the corresponding storage location is determined as the target storage location. It should be noted that when the to-be-processed service is a smart phone call service, different called numbers correspond to different service identification information, i.e., each call corresponds to one piece of service identification information.
In this way, because the distribution is performed based on the service identification information of the to-be-processed service, all to-be-processed events of the same to-be-processed service are stored in the same target storage location; and since the events in a target storage location are executed in a defined order, the different events of the same service are processed sequentially, which ensures that the service is processed correctly. Moreover, because the distribution step involves no event processing logic and only forwards events, its overhead is practically negligible and does not reduce service processing efficiency. In addition, since the service identification information of each to-be-processed service is different, the to-be-processed events of different services are more likely to be stored in different target storage locations, so that different services can be processed in parallel, which reduces latency and improves the processing efficiency of the services.
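To make steps S104-2 through S104-8 concrete, the following is a minimal sketch that assumes MD5 as the first preset algorithm and a modulo operation as the second; the class and method names are illustrative and not taken from the patent.

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Illustrative sketch of steps S104-2 to S104-8: hash the service identification
// information (first preset algorithm), then take it modulo the number of storage
// locations (second preset algorithm) to obtain the target storage location number.
public final class StorageLocationSelector {

    // Step S104-2: compute a hash value of the service identification information.
    static BigInteger hashOf(String serviceId) {
        try {
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            byte[] digest = md5.digest(serviceId.getBytes(StandardCharsets.UTF_8));
            return new BigInteger(1, digest);              // non-negative value
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("MD5 not available", e);
        }
    }

    // Steps S104-4 to S104-8: the remainder is always in [0, firstNumber),
    // so it maps directly onto a storage location number.
    static int targetLocation(String serviceId, int firstNumber) {
        return hashOf(serviceId).mod(BigInteger.valueOf(firstNumber)).intValue();
    }

    public static void main(String[] args) {
        // Events carrying the same service ID map to the same location number,
        // so they are processed in order by the same service processing thread.
        System.out.println(targetLocation("call-20200819-001", 8));
        System.out.println(targetLocation("call-20200819-002", 8));
    }
}
```

All events carrying the same service identification information map to the same location number, while events of different services usually spread across different locations.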
Further, to improve the processing efficiency of services, in one or more embodiments of the present disclosure a plurality of executors are preset, and different services are processed in parallel based on these executors. Each executor comprises a first queue for storing to-be-processed events and a service processing thread that correspondingly processes the events in that first queue, i.e., the first queues and the service processing threads are in one-to-one correspondence. Specifically, as shown in fig. 6, step S104 includes the following step S1042:
Step S1042, carrying out distribution processing on the event to be processed based on a plurality of preset executors to obtain a target executor for storing the event to be processed;
corresponding to step S1042, as shown in fig. 6, step S106 includes the following step S1062:
step S1062, storing the event to be processed in the first queue of the target executor, so that the service processing thread in the target executor processes the event to be processed in the corresponding first queue.
Since the queue has a first-in first-out property, by saving the events to be processed into the first queue, an orderly execution of the events in the first queue is ensured.
Further, when event distribution is performed based on the executors, as shown in fig. 7, the above step S104-4 includes the following step S104-42:
step S104-42, determining a first number of the preset executors;
accordingly, as shown in fig. 7, step S104-8 includes the following step S104-82:
step S104-82, determining an executor number matched with the calculation result, and determining the executor corresponding to the matched executor number as the target executor.
In this way, by providing the executors, i.e., by building a thread pool, and distributing the to-be-processed events, the events of different to-be-processed services are stored in the first queues of their respective target executors, and the service processing thread in each target executor processes the to-be-processed events in its corresponding first queue. This reduces event latency and allows multiple events to be processed in parallel, improving the timeliness and processing efficiency of events. In addition, when the to-be-processed service is a smart phone call service, the number of concurrent calls that a single machine can support is also increased for the same server configuration.
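A minimal sketch of such an executor arrangement is given below, assuming each executor is a single-threaded worker draining its own internal queue (the first queue). The class name PartitionedExecutors and the use of Java's Executors utility are illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative sketch: one single-threaded executor per partition, each with its
// own internal FIFO queue (the "first queue"), selected by hash(serviceId) % firstNumber.
public class PartitionedExecutors {
    private final List<ExecutorService> executors = new ArrayList<>();

    public PartitionedExecutors(int firstNumber) {
        for (int i = 0; i < firstNumber; i++) {
            // A single-thread executor keeps a FIFO work queue and one worker thread,
            // matching the first queue + service processing thread pair in the text.
            executors.add(Executors.newSingleThreadExecutor());
        }
    }

    public void submit(String serviceId, Runnable pendingEvent) {
        int target = Math.floorMod(serviceId.hashCode(), executors.size());
        executors.get(target).execute(pendingEvent);   // events of one service stay ordered
    }

    public void shutdown() {
        executors.forEach(ExecutorService::shutdown);
    }
}
```

Using one single-thread executor per partition keeps the events of one service ordered while different partitions run in parallel, which is the effect the description aims for.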
The foregoing describes event distribution based on executors. In one or more embodiments of the present disclosure, a custom queue may also be defined, and event distribution may be performed based on the custom queue. The custom queue comprises a second queue and a plurality of chain structures, each chain structure corresponding to one service processing thread; the second queue stores the to-be-processed events of each to-be-processed service, each chain structure stores the to-be-processed events distributed to it, and each service processing thread processes the to-be-processed events in its corresponding chain structure. Specifically, as shown in fig. 8, step S104 includes the following step S1044:
step S1044, storing the event to be processed in a preset second queue, and performing event distribution processing based on a plurality of chain structures corresponding to the second queue to obtain a target chain structure for storing the event to be processed;
corresponding to step S1044, as shown in fig. 8, step S106 includes the following step S1064:
step S1064, the event to be processed is obtained from the second queue, and the obtained event to be processed is saved in the target chain structure, so that the service processing thread corresponding to the target chain structure processes the event to be processed in the target chain structure correspondingly.
Further, when the event to be processed is distributed based on the custom queue, as shown in fig. 9, the step S104-4 may include the following steps S104-44:
step S104-44, determining a first number of preset multiple chain structures;
accordingly, as shown in FIG. 9, step S104-8 includes the following steps S104-84:
step S104-84, determining a chain structure number matched with the calculation result, and determining a chain structure corresponding to the matched chain structure number as a target chain structure.
In this way, the to-be-processed events of each service are first stored in the second queue; because a queue is first-in first-out, distribution can proceed in the order in which the events were stored, ensuring that events are distributed in an orderly manner. Meanwhile, the to-be-processed events of different to-be-processed services are stored in their respective target chain structures, so that the service processing thread corresponding to each target chain structure processes the events in that chain structure; this reduces event latency, allows multiple events to be processed in parallel, and improves the timeliness and processing efficiency of events.
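The following is a minimal sketch of such a custom queue, assuming a dispatcher thread that drains the second queue and appends each event to the chain structure (modeled here as a per-thread linked queue) selected by hashing the service identification information. All class, thread, and field names are illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative sketch: a shared second queue receives every event; a dispatcher
// thread moves each event into one of several chain structures (linked queues),
// and each chain structure is drained by its own service processing thread.
public class CustomQueue {

    public record PendingEvent(String serviceId, String type) {}

    private final BlockingQueue<PendingEvent> secondQueue = new LinkedBlockingQueue<>();
    private final List<ConcurrentLinkedQueue<PendingEvent>> chains = new ArrayList<>();

    public CustomQueue(int firstNumber) {
        for (int i = 0; i < firstNumber; i++) {
            ConcurrentLinkedQueue<PendingEvent> chain = new ConcurrentLinkedQueue<>();
            chains.add(chain);
            startDaemon("service-processing-thread-" + i, () -> {
                PendingEvent event = chain.poll();        // take the next event, if any
                if (event != null) {
                    System.out.println(Thread.currentThread().getName() + " handles " + event);
                } else {
                    Thread.onSpinWait();                  // chain is empty; avoid a hot loop
                }
            });
        }
        // Dispatcher for steps S1044/S1064: pull from the second queue, pick the target chain.
        startDaemon("dispatcher", () -> {
            try {
                PendingEvent event = secondQueue.take();
                int target = Math.floorMod(event.serviceId().hashCode(), chains.size());
                chains.get(target).offer(event);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
    }

    public void submit(PendingEvent event) {
        secondQueue.offer(event);
    }

    private static void startDaemon(String name, Runnable step) {
        Thread t = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                step.run();
            }
        }, name);
        t.setDaemon(true);
        t.start();
    }
}
```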
In one or more embodiments of the present disclosure, when a server obtains at least one event to be processed of a service to be processed, distributing the event to be processed based on a plurality of preset storage positions to obtain a target storage position for storing the event to be processed; and storing the event to be processed to a target storage position so as to enable the business processing thread corresponding to the target storage position to process the event to be processed correspondingly. Therefore, by setting a plurality of storage positions and a plurality of corresponding service processing threads, the plurality of service processing threads can process the events to be processed in the corresponding storage positions in a parallel manner, so that the time delay of the events is greatly reduced, the timeliness and the processing efficiency of the events are improved, and the processing efficiency of the service is further improved.
Corresponding to the above-described service processing method, one or more embodiments of the present disclosure further provide another service processing method, which is applied to the client, based on the same technical concept. Fig. 10 is a flow chart of another service processing method according to one or more embodiments of the present disclosure, as shown in fig. 10, the method includes the following steps:
step S202, obtaining service processing information of a service to be processed;
Specifically, the user edits the service processing information in the editing interface of the client and clicks the submit control when editing is complete. When the client detects the user's submit operation, it acquires the service processing information.
Step S204, if the preset service processing condition is met according to the acquired service processing information, generating at least one event to be processed of the service to be processed;
specifically, if the preset service processing condition is met according to the acquired service processing information, generating service identification information of the service to be processed, and generating at least one event to be processed of the service to be processed according to the service identification information.
Step S206, a service processing request is sent to the server according to the event to be processed, so that the server distributes the event to be processed based on a plurality of preset storage positions to obtain a target storage position for storing the event to be processed, the event to be processed is stored to the target storage position, and the service processing thread corresponding to the target storage position correspondingly processes the event to be processed.
In one or more embodiments of the present disclosure, a client sends a service processing request to a server according to a generated event to be processed of a service to distribute the event to be processed based on a plurality of preset storage positions, so that a target storage position for storing the event to be processed is obtained, the event to be processed is stored in the target storage position, and a service processing thread corresponding to the target storage position performs corresponding processing on the event to be processed. Therefore, by setting a plurality of storage positions and a plurality of corresponding service processing threads, the plurality of service processing threads can process the events to be processed in the corresponding storage positions in a parallel manner, so that the time delay of the events is greatly reduced, the timeliness and the processing efficiency of the events are improved, and the processing efficiency of the service is further improved.
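As a sketch of the client side of steps S202 to S206, the snippet below assumes the service processing request is sent to the server as a JSON payload over HTTP; the endpoint URL and field names are assumptions for illustration only and are not defined by the patent.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Illustrative sketch: the client checks the service processing condition elsewhere,
// generates a to-be-processed event, and sends a service processing request to the
// server, which then distributes and processes the event.
public class ServiceProcessingClient {
    private static final HttpClient HTTP = HttpClient.newHttpClient();

    static void submit(String serviceId, String eventType) throws Exception {
        String body = String.format(
                "{\"serviceId\":\"%s\",\"event\":\"%s\"}", serviceId, eventType);
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/api/service/process")) // assumed endpoint
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = HTTP.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("server replied: " + response.statusCode());
    }
}
```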
Corresponding to the service processing methods described in fig. 2 to 9, one or more embodiments of the present disclosure further provide a service processing apparatus based on the same technical concept. Fig. 11 is a schematic block diagram of a service processing apparatus according to one or more embodiments of the present disclosure, where the apparatus is configured to perform the service processing method described in fig. 2 to 9, and as shown in fig. 11, the apparatus includes:
An acquiring module 301, configured to acquire at least one to-be-processed event of a to-be-processed service;
the distribution module 302 is configured to perform distribution processing on the event to be processed based on a plurality of preset storage locations, so as to obtain a target storage location for storing the event to be processed;
and the storage module 303 is configured to store the event to be processed to the target storage location, so that the service processing thread corresponding to the target storage location performs corresponding processing on the event to be processed.
Optionally, the event to be processed carries service identification information of the service to be processed; the distribution module 302 is specifically configured to:
according to a first preset algorithm, calculating a hash value of the service identification information;
determining a first number of the preset plurality of storage locations;
according to a second preset algorithm, calculating based on the hash value and the first quantity to obtain a calculation result;
and determining a storage area number matched with the calculation result, and determining a storage area corresponding to the matched storage area number as a target storage area.
Optionally, the acquiring module 301 is specifically configured to:
acquiring service processing information of the service to be processed;
And if the preset service processing condition is met according to the service processing information, generating service identification information of the service to be processed, and generating at least one event to be processed of the service to be processed according to the service identification information.
Optionally, the acquiring module 301 is specifically configured to:
receiving a service processing request sent by a client; wherein the service processing request includes at least one event to be processed of a service to be processed; the service processing request is sent after determining that a preset service processing condition is met based on the acquired service processing information, and generating the event to be processed based on the generated service identification information of the service to be processed.
Optionally, the distributing module 302 is specifically configured to:
and distributing the event to be processed based on a plurality of preset executors to obtain a target executor for storing the event to be processed.
Optionally, the executor includes a first queue and a service processing thread, and correspondingly, the saving module 303 is specifically configured to:
and storing the event to be processed to the first queue in the target executor so that the service processing thread in the target executor can correspondingly process the event to be processed in the first queue.
Optionally, the distributing module 302 is specifically configured to:
determining a first number of the preset plurality of executors; and
determining an executor number matched with the calculation result, and determining the executor corresponding to the matched executor number as the target executor.
Optionally, the distributing module 302 is specifically configured to:
storing the event to be processed into a preset second queue, and performing event distribution processing based on a plurality of chain structures corresponding to the second queue to obtain a target chain structure for storing the event to be processed;
accordingly, the saving module 303 is specifically configured to:
acquiring the event to be processed from the second queue;
and storing the acquired event to be processed into the target chain structure so as to enable the business processing thread corresponding to the target chain structure to process the event to be processed in the target chain structure correspondingly.
Optionally, the distributing module 302 is specifically configured to:
determining a first number of the preset plurality of chain structures;
and determining a chain structure number matched with the calculation result, and determining a chain structure corresponding to the matched chain structure number as a target chain structure.
When at least one event to be processed of a service to be processed is acquired, the service processing device provided by one or more embodiments of the present disclosure distributes the event to be processed based on a plurality of preset storage positions to obtain a target storage position for storing the event to be processed; and storing the event to be processed to a target storage position so as to enable the business processing thread corresponding to the target storage position to process the event to be processed correspondingly. Therefore, by setting a plurality of storage positions and a plurality of corresponding service processing threads, the plurality of service processing threads can process the events to be processed in the corresponding storage positions in a parallel manner, so that the time delay of the events is greatly reduced, the timeliness and the processing efficiency of the events are improved, and the processing efficiency of the service is further improved.
It should be noted that the embodiments relating to the service processing apparatus and the embodiments relating to the service processing method in this specification are based on the same inventive concept; therefore, for the specific implementation of this embodiment, reference may be made to the implementation of the corresponding service processing method described above, and repeated details are not described again.
Further, according to the service processing method described in fig. 10, one or more embodiments of the present disclosure further provide another service processing apparatus based on the same technical concept. Fig. 12 is a schematic block diagram of another service processing apparatus according to one or more embodiments of the present disclosure, where the apparatus is configured to perform the service processing method described in fig. 10, and as shown in fig. 12, the apparatus includes:
An acquiring module 401, configured to acquire service processing information of a service to be processed;
a generating module 402, configured to generate at least one to-be-processed event of the to-be-processed service if it is determined that a preset service processing condition is met according to the service processing information;
and the sending module 403 is configured to send a service processing request to a server according to the event to be processed, so that the server distributes the event to be processed based on a plurality of preset storage positions, obtains a target storage position for storing the event to be processed, stores the event to be processed in the target storage position, and causes a service processing thread corresponding to the target storage position to perform corresponding processing on the event to be processed.
Optionally, the generating module 402 is specifically configured to:
generating service identification information of the service to be processed;
and generating at least one event to be processed of the service to be processed according to the service identification information.
According to the service processing device provided by one or more embodiments of the present disclosure, a service processing request is sent to a server according to a generated event to be processed of a service to enable the server to distribute and process the event to be processed based on a plurality of preset storage positions, to obtain a target storage position for storing the event to be processed, and the event to be processed is stored in the target storage position, so that a service processing thread corresponding to the target storage position processes the event to be processed correspondingly. Therefore, by setting a plurality of storage positions and a plurality of corresponding service processing threads, the plurality of service processing threads can process the events to be processed in the corresponding storage positions in a parallel manner, so that the time delay of the events is greatly reduced, the timeliness and the processing efficiency of the events are improved, and the processing efficiency of the service is further improved.
It should be noted that the embodiments relating to the service processing apparatus and the embodiments relating to the service processing method in this specification are based on the same inventive concept; therefore, for the specific implementation of this embodiment, reference may be made to the implementation of the corresponding service processing method described above, and repeated details are not described again.
Further, according to the above-described service processing method, based on the same technical concept, one or more embodiments of the present disclosure further provide a service processing device, where the service processing device is configured to perform the above-described service processing method, and fig. 13 is a schematic structural diagram of a service processing device provided by one or more embodiments of the present disclosure.
As shown in fig. 13, the service processing device may vary considerably depending on its configuration or performance, and may include one or more processors 501 and a memory 502, where the memory 502 may store one or more applications or data. The memory 502 may provide transient or persistent storage. An application stored in the memory 502 may include one or more modules (not shown), and each module may include a series of computer executable instructions for the service processing device. Further, the processor 501 may be configured to communicate with the memory 502 and execute the series of computer executable instructions in the memory 502 on the service processing device. The service processing device may also include one or more power supplies 503, one or more wired or wireless network interfaces 504, one or more input/output interfaces 505, one or more keyboards 506, and the like.
In a particular embodiment, a business processing device includes a memory, and one or more programs, wherein the one or more programs are stored in the memory, and the one or more programs may include one or more modules, and each module may include a series of computer executable instructions for the business processing device, and configured to be executed by one or more processors, the one or more programs comprising computer executable instructions for:
acquiring at least one event to be processed of a service to be processed;
distributing the event to be processed based on a plurality of preset storage positions to obtain a target storage position for storing the event to be processed;
and storing the event to be processed to the target storage position so that the business processing thread corresponding to the target storage position carries out corresponding processing on the event to be processed.
Optionally, when the computer executable instructions are executed, the event to be processed carries service identification information of the service to be processed;
the distributing processing is performed on the event to be processed based on a plurality of preset storage positions to obtain a target storage position for storing the event to be processed, and the distributing processing comprises the following steps:
According to a first preset algorithm, calculating a hash value of the service identification information;
determining a first number of the preset plurality of storage locations;
according to a second preset algorithm, calculating based on the hash value and the first quantity to obtain a calculation result;
and determining a storage area number matched with the calculation result, and determining a storage area corresponding to the matched storage area number as a target storage area.
Optionally, the acquiring at least one pending event of the pending service when executed comprises:
acquiring service processing information of the service to be processed;
and if the preset service processing condition is met according to the service processing information, generating service identification information of the service to be processed, and generating at least one event to be processed of the service to be processed according to the service identification information.
Optionally, the acquiring at least one pending event of the pending service when executed comprises:
receiving a service processing request sent by a client; wherein the service processing request includes at least one event to be processed of a service to be processed; the service processing request is sent after determining that a preset service processing condition is met based on the acquired service processing information, and generating the event to be processed based on the generated service identification information of the service to be processed.
Optionally, when the computer executable instructions are executed, the distributing the to-be-processed event based on a plurality of preset storage locations to obtain a target storage location for storing the to-be-processed event, including:
and distributing the event to be processed based on a plurality of preset executors to obtain a target executor for storing the event to be processed.
Optionally, when the computer executable instructions are executed, the executor comprises: a first queue and a service processing thread;
the step of saving the event to be processed to the target storage position so that the service processing thread corresponding to the target storage position carries out corresponding processing on the event to be processed, including:
and storing the event to be processed to the first queue in the target executor so that the service processing thread in the target executor can correspondingly process the event to be processed in the first queue.
Optionally, the determining the first number of the preset plurality of storage locations when the computer executable instructions are executed includes:
determining a first number of the preset plurality of executors;
The determining the storage area number matched with the calculation result, and determining the storage area corresponding to the matched storage area number as a target storage area includes:
determining an executor number matched with the calculation result, and determining the executor corresponding to the matched executor number as the target executor.
Optionally, when the computer executable instructions are executed, the distributing the to-be-processed event based on a plurality of preset storage locations to obtain a target storage location for storing the to-be-processed event, including:
storing the event to be processed into a preset second queue, and performing event distribution processing based on a plurality of chain structures corresponding to the second queue to obtain a target chain structure for storing the event to be processed;
the step of saving the event to be processed to the target storage position so that the service processing thread corresponding to the target storage position carries out corresponding processing on the event to be processed, including:
acquiring the event to be processed from the second queue;
and storing the acquired event to be processed into the target chain structure so as to enable the business processing thread corresponding to the target chain structure to process the event to be processed in the target chain structure correspondingly.
Optionally, the determining the first number of the preset plurality of storage locations when the computer executable instructions are executed includes:
determining a first number of the preset plurality of chain structures;
the determining a storage area number matched with the calculation result, and determining the storage area corresponding to the matched storage area number as the target storage area includes:
and determining a chain structure number matched with the calculation result, and determining a chain structure corresponding to the matched chain structure number as a target chain structure.
The service processing device provided by one or more embodiments of the present disclosure, when obtaining at least one event to be processed of a service to be processed, performs distribution processing on the event to be processed based on a plurality of preset storage positions, to obtain a target storage position for storing the event to be processed; and storing the event to be processed to a target storage position so as to enable the business processing thread corresponding to the target storage position to process the event to be processed correspondingly. Therefore, by setting a plurality of storage positions and a plurality of corresponding service processing threads, the plurality of service processing threads can process the events to be processed in the corresponding storage positions in a parallel manner, so that the time delay of the events is greatly reduced, the timeliness and the processing efficiency of the events are improved, and the processing efficiency of the service is further improved.
In another particular embodiment, a business processing device includes a memory, and one or more programs, wherein the one or more programs are stored in the memory, and the one or more programs may include one or more modules, and each module may include a series of computer executable instructions for the business processing device, and configured to be executed by one or more processors, the one or more programs comprising computer executable instructions for:
acquiring service processing information of a service to be processed;
if the preset service processing conditions are met according to the service processing information, generating at least one event to be processed of the service to be processed;
and sending a service processing request to a server according to the event to be processed, so that the server distributes the event to be processed based on a plurality of preset storage positions to obtain a target storage position for storing the event to be processed, and storing the event to be processed to the target storage position, so that a service processing thread corresponding to the target storage position carries out corresponding processing on the event to be processed.
Optionally, when the computer executable instructions are executed, the generating at least one event to be processed of the service to be processed comprises:
generating service identification information of the service to be processed;
and generating at least one event to be processed of the service to be processed according to the service identification information.
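A minimal sketch of this client-side flow, assuming an HTTP transport and a JSON event body; the class ServiceClient, the method meetsProcessingCondition, the endpoint URL, and the event format are illustrative assumptions rather than details specified by the embodiment.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.UUID;

// Illustrative client-side flow: acquire service processing information,
// check the preset service processing condition, generate an event to be
// processed that carries service identification information, and send it to
// the server as a service processing request.
public class ServiceClient {

    private final HttpClient http = HttpClient.newHttpClient();
    private final String serverUrl; // assumed endpoint, e.g. "http://server/events"

    public ServiceClient(String serverUrl) {
        this.serverUrl = serverUrl;
    }

    // Placeholder for the preset service processing condition (assumption).
    private boolean meetsProcessingCondition(String serviceInfo) {
        return serviceInfo != null && !serviceInfo.isEmpty();
    }

    public void process(String serviceInfo) throws Exception {
        if (!meetsProcessingCondition(serviceInfo)) {
            return; // condition not met: no event to be processed is generated
        }
        // Generate service identification information of the service to be processed.
        String serviceId = UUID.randomUUID().toString();
        // Generate an event to be processed according to the service identification
        // information (a minimal JSON body; the real event format is not specified).
        String eventJson = "{\"serviceId\":\"" + serviceId + "\",\"info\":\"" + serviceInfo + "\"}";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(serverUrl))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(eventJson))
                .build();
        HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("server replied: " + response.statusCode());
    }
}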
According to the service processing equipment provided by one or more embodiments of the present disclosure, a service processing request is sent to a server according to a generated event to be processed of a service to be processed, so that the server distributes the event to be processed based on a plurality of preset storage positions to obtain a target storage position for storing the event to be processed, and the event to be processed is stored in the target storage position, so that a service processing thread corresponding to the target storage position processes the event to be processed correspondingly. Therefore, by setting a plurality of storage positions and a plurality of corresponding service processing threads, the plurality of service processing threads can process the events to be processed in the corresponding storage positions in a parallel manner, so that the time delay of the events is greatly reduced, the timeliness and the processing efficiency of the events are improved, and the processing efficiency of the service is further improved.
It should be noted that the embodiment of the service processing apparatus and the embodiment of the service processing method in the present specification are based on the same inventive concept, so for the specific implementation of this embodiment, reference may be made to the implementation of the foregoing corresponding service processing method, and repeated details are not described again.
Further, in accordance with the above-described service processing method and based on the same technical concept, one or more embodiments of the present disclosure further provide a storage medium for storing computer executable instructions. In a specific embodiment, the storage medium may be a USB flash drive, an optical disc, a hard disk, or the like, and the computer executable instructions stored in the storage medium can implement the following flow when executed by a processor:
acquiring at least one event to be processed of a service to be processed;
distributing the event to be processed based on a plurality of preset storage positions to obtain a target storage position for storing the event to be processed;
and storing the event to be processed to the target storage position so that the business processing thread corresponding to the target storage position carries out corresponding processing on the event to be processed.
Optionally, when the computer executable instructions stored in the storage medium are executed by the processor, the event to be processed carries service identification information of the service to be processed;
the distributing processing is performed on the event to be processed based on a plurality of preset storage positions to obtain a target storage position for storing the event to be processed, and the distributing processing comprises the following steps:
according to a first preset algorithm, calculating a hash value of the service identification information;
determining a first number of the preset plurality of storage locations;
according to a second preset algorithm, calculating based on the hash value and the first quantity to obtain a calculation result;
and determining a storage area number matched with the calculation result, and determining a storage area corresponding to the matched storage area number as a target storage area.
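The text does not name the first and second preset algorithms; a straightforward reading is a hash of the service identification information followed by a modulo over the first number, and the sketch below assumes exactly that (MD5 here is only one possible hash, and the class and method names are illustrative assumptions).

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Illustrative sketch of the two-step selection: hash the service
// identification information (the assumed first preset algorithm), then map
// the hash onto the number of storage locations with a modulo (the assumed
// second preset algorithm).
public final class StorageSelector {

    public static int hash(String serviceId) {
        try {
            byte[] digest = MessageDigest.getInstance("MD5")
                    .digest(serviceId.getBytes(StandardCharsets.UTF_8));
            // Fold the first four bytes of the digest into an int.
            return ((digest[0] & 0xFF) << 24) | ((digest[1] & 0xFF) << 16)
                 | ((digest[2] & 0xFF) << 8)  |  (digest[3] & 0xFF);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    // firstNumber = number of preset storage locations (executors or chain structures).
    public static int targetStorageNumber(String serviceId, int firstNumber) {
        int h = hash(serviceId);                // assumed first preset algorithm
        return Math.floorMod(h, firstNumber);   // assumed second preset algorithm
    }

    public static void main(String[] args) {
        // Events with the same service identification always map to the same
        // storage area number, so one service is processed by a single thread.
        System.out.println(targetStorageNumber("order-10086", 8));
    }
}

Any events that carry the same service identification information therefore map to the same storage area number, which is what lets the corresponding service processing thread handle one service's events in order.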
Optionally, when the computer executable instructions stored on the storage medium are executed by the processor, the acquiring at least one event to be processed of the service to be processed includes:
acquiring service processing information of the service to be processed;
and if the preset service processing condition is met according to the service processing information, generating service identification information of the service to be processed, and generating at least one event to be processed of the service to be processed according to the service identification information.
Optionally, when the computer executable instructions stored on the storage medium are executed by the processor, the acquiring at least one event to be processed of the service to be processed includes:
receiving a service processing request sent by a client; wherein the service processing request includes at least one event to be processed of a service to be processed; the service processing request is sent after determining that a preset service processing condition is met based on the acquired service processing information, and generating the event to be processed based on the generated service identification information of the service to be processed.
Optionally, when the computer executable instructions stored in the storage medium are executed by the processor, the distributing the event to be processed based on a plurality of preset storage locations to obtain a target storage location for storing the event to be processed includes:
and distributing the event to be processed based on a plurality of preset executors to obtain a target executor for storing the event to be processed.
Optionally, when the computer executable instructions stored in the storage medium are executed by the processor, the executor comprises: a first queue and a business processing thread;
the step of saving the event to be processed to the target storage position so that the service processing thread corresponding to the target storage position carries out corresponding processing on the event to be processed, including:
and storing the event to be processed to the first queue in the target executor so that the service processing thread in the target executor can correspondingly process the event to be processed in the first queue.
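A hedged sketch of an executor as described here, namely a first queue paired with a single business processing thread that drains it; the class name EventExecutor, the use of a LinkedBlockingQueue, and the handler callback are assumptions made for the example, not elements fixed by the disclosure.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Consumer;

// Illustrative executor: one first queue plus one business processing thread.
// Saving a pending event means offering it to the queue; the thread takes
// events off the queue and processes them in arrival order.
public class EventExecutor {

    private final BlockingQueue<Object> firstQueue = new LinkedBlockingQueue<>();
    private final Thread worker;

    public EventExecutor(int number, Consumer<Object> handler) {
        worker = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    Object event = firstQueue.take();   // blocks until an event arrives
                    handler.accept(event);              // corresponding processing
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();     // allow shutdown
            }
        }, "business-processing-thread-" + number);
        worker.setDaemon(true);
        worker.start();
    }

    // Called after the distribution step has chosen this executor as the target.
    public void save(Object pendingEvent) {
        firstQueue.offer(pendingEvent);
    }
}

With a fixed array of such executors, the target executor number obtained from the distribution step indexes directly into the array, so saving the event is a single queue offer and the paired thread performs the corresponding processing.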
Optionally, when the computer executable instructions stored on the storage medium are executed by the processor, the determining the first number of the preset plurality of storage locations comprises:
determining a first number of the preset plurality of executors;
the determining the storage area number matched with the calculation result, and determining the storage area corresponding to the matched storage area number as a target storage area includes:
determining an executor number matched with the calculation result, and determining an executor corresponding to the matched executor number as a target executor.
Optionally, when the computer executable instructions stored in the storage medium are executed by the processor, the distributing the event to be processed based on a plurality of preset storage locations to obtain a target storage location for storing the event to be processed includes:
storing the event to be processed into a preset second queue, and performing event distribution processing based on a plurality of chain structures corresponding to the second queue to obtain a target chain structure for storing the event to be processed;
the step of saving the event to be processed to the target storage position so that the service processing thread corresponding to the target storage position carries out corresponding processing on the event to be processed, including:
acquiring the event to be processed from the second queue;
and storing the acquired event to be processed into the target chain structure so as to enable the business processing thread corresponding to the target chain structure to process the event to be processed in the target chain structure correspondingly.
Optionally, when the computer executable instructions stored on the storage medium are executed by the processor, the determining the first number of the preset plurality of storage locations comprises:
determining a first number of the preset plurality of chain structures;
the determining the storage area number matched with the calculation result, and determining the storage area corresponding to the matched storage area number as the target storage area includes:
and determining a chain structure number matched with the calculation result, and determining a chain structure corresponding to the matched chain structure number as a target chain structure.
When the computer executable instructions stored in the storage medium provided by one or more embodiments of the present disclosure are executed by the processor, upon acquiring at least one event to be processed of a service to be processed, the event to be processed is distributed based on a plurality of preset storage positions to obtain a target storage position for storing the event to be processed; the event to be processed is then stored to the target storage position so that the business processing thread corresponding to the target storage position processes the event to be processed correspondingly. Therefore, by setting a plurality of storage positions and a plurality of corresponding service processing threads, the plurality of service processing threads can process the events to be processed in the corresponding storage positions in a parallel manner, so that the time delay of the events is greatly reduced, the timeliness and the processing efficiency of the events are improved, and the processing efficiency of the service is further improved.
In another specific embodiment, the storage medium may be a USB flash drive, an optical disk, a hard disk, or the like, where the computer executable instructions stored in the storage medium, when executed by the processor, implement the following procedures:
acquiring service processing information of a service to be processed;
if the preset service processing conditions are met according to the service processing information, generating at least one event to be processed of the service to be processed;
and sending a service processing request to a server according to the event to be processed, so that the server distributes the event to be processed based on a plurality of preset storage positions to obtain a target storage position for storing the event to be processed, and storing the event to be processed to the target storage position, so that a service processing thread corresponding to the target storage position carries out corresponding processing on the event to be processed.
When the computer executable instructions stored in the storage medium provided by one or more embodiments of the present disclosure are executed by a processor, a service processing request is sent to a server according to a generated to-be-processed event of a to-be-processed service, so that the server performs distribution processing on the to-be-processed event based on a plurality of preset storage positions, obtains a target storage position for storing the to-be-processed event, stores the to-be-processed event in the target storage position, and enables a service processing thread corresponding to the target storage position to perform corresponding processing on the to-be-processed event. Therefore, by setting a plurality of storage positions and a plurality of corresponding service processing threads, the plurality of service processing threads can process the events to be processed in the corresponding storage positions in a parallel manner, so that the time delay of the events is greatly reduced, the timeliness and the processing efficiency of the events are improved, and the processing efficiency of the service is further improved.
It should be noted that the embodiment of the storage medium and the embodiment of the service processing method in the present specification are based on the same inventive concept, so for the specific implementation of this embodiment, reference may be made to the implementation of the foregoing corresponding service processing method, and repeated details are not described again.
One skilled in the relevant art will recognize that one or more embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, one or more embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present description can take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present description is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the specification. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or nonvolatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transitory media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
One or more embodiments of the present specification may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. One or more embodiments of the specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
In this specification, the embodiments are described in a progressive manner; identical or similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the other embodiments. In particular, since the system embodiments are substantially similar to the method embodiments, their description is relatively simple, and for relevant parts, reference may be made to the corresponding description of the method embodiments.
The foregoing description is by way of example only and is not intended to limit the present disclosure. Various modifications and changes may occur to those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. that fall within the spirit and principles of the present document are intended to be included within the scope of the claims of the present document.

Claims (16)

1. A service processing method, characterized in that the method is applied to a server and comprises the following steps:
acquiring at least one event to be processed of a service to be processed;
distributing the event to be processed based on a plurality of preset storage positions to obtain a target storage position for storing the event to be processed;
storing the event to be processed to the target storage position so that a business processing thread corresponding to the target storage position carries out corresponding processing on the event to be processed;
the event to be processed carries service identification information of the service to be processed;
the distributing processing is performed on the event to be processed based on a plurality of preset storage positions to obtain a target storage position for storing the event to be processed, and the distributing processing comprises the following steps:
according to a first preset algorithm, calculating a hash value of the service identification information;
determining a first number of the preset plurality of storage locations;
according to a second preset algorithm, calculating based on the hash value and the first quantity to obtain a calculation result;
and determining a storage area number matched with the calculation result, and determining a storage area corresponding to the matched storage area number as a target storage area.
2. The method of claim 1, wherein the acquiring at least one pending event for a pending service comprises:
acquiring service processing information of the service to be processed;
and if the preset service processing condition is met according to the service processing information, generating service identification information of the service to be processed, and generating at least one event to be processed of the service to be processed according to the service identification information.
3. The method of claim 1, wherein the acquiring at least one pending event for a pending service comprises:
receiving a service processing request sent by a client; wherein the service processing request includes at least one event to be processed of a service to be processed; the service processing request is sent after determining that a preset service processing condition is met based on the acquired service processing information, and generating the event to be processed based on the generated service identification information of the service to be processed.
4. The method according to claim 1, wherein the distributing the event to be processed based on a plurality of preset storage locations to obtain a target storage location for storing the event to be processed includes:
and distributing the event to be processed based on a plurality of preset executors to obtain a target executor for storing the event to be processed.
5. The method of claim 4, wherein the executor comprises: a first queue and a business processing thread;
the step of saving the event to be processed to the target storage position so that the service processing thread corresponding to the target storage position carries out corresponding processing on the event to be processed, including:
and storing the event to be processed to the first queue in the target executor so that the service processing thread in the target executor can correspondingly process the event to be processed in the first queue.
6. The method of claim 4, wherein the determining the first number of the preset plurality of storage locations comprises:
determining a first number of the preset plurality of executors;
the determining the storage area number matched with the calculation result, and determining the storage area corresponding to the matched storage area number as a target storage area includes: determining an executor number matched with the calculation result, and determining an executor corresponding to the matched executor number as a target executor.
7. The method according to claim 1, wherein the distributing the event to be processed based on the preset plurality of storage locations, to obtain a target storage location for storing the event to be processed, includes:
storing the event to be processed into a preset second queue, and performing event distribution processing based on a plurality of chain structures corresponding to the second queue to obtain a target chain structure for storing the event to be processed;
the step of saving the event to be processed to the target storage position so that the service processing thread corresponding to the target storage position carries out corresponding processing on the event to be processed, including:
acquiring the event to be processed from the second queue;
and storing the acquired event to be processed into the target chain structure so as to enable the business processing thread corresponding to the target chain structure to process the event to be processed in the target chain structure correspondingly.
8. The method of claim 7, wherein the determining the first number of the preset plurality of storage locations comprises:
determining a first number of the preset plurality of chain structures;
the determining the storage area number matched with the calculation result, and determining the storage area corresponding to the matched storage area number as the target storage area includes:
and determining a chain structure number matched with the calculation result, and determining a chain structure corresponding to the matched chain structure number as a target chain structure.
9. A service processing method applied to a client, comprising:
acquiring service processing information of a service to be processed;
if the preset service processing conditions are met according to the service processing information, generating at least one event to be processed of the service to be processed;
sending a service processing request to a server according to the event to be processed, so that the server distributes the event to be processed based on a plurality of preset storage positions to obtain a target storage position for storing the event to be processed,
and stores the event to be processed to the target storage position so that a business processing thread corresponding to the target storage position carries out corresponding processing on the event to be processed;
the event to be processed carries service identification information of the service to be processed;
the distributing processing is performed on the event to be processed based on a plurality of preset storage positions to obtain a target storage position for storing the event to be processed, and the distributing processing comprises the following steps:
according to a first preset algorithm, calculating a hash value of the service identification information;
determining a first number of the preset plurality of storage locations;
according to a second preset algorithm, calculating based on the hash value and the first quantity to obtain a calculation result;
and determining a storage area number matched with the calculation result, and determining a storage area corresponding to the matched storage area number as a target storage area.
10. The method of claim 9, wherein the generating at least one pending event for the pending service comprises:
generating the service identification information of the service to be processed;
and generating at least one event to be processed of the service to be processed according to the service identification information.
11. A service processing apparatus, applied to a server, comprising:
the acquisition module is used for acquiring at least one event to be processed of the service to be processed;
the distribution module is used for carrying out distribution processing on the event to be processed based on a plurality of preset storage positions to obtain a target storage position for storing the event to be processed;
the storage module is used for storing the event to be processed to the target storage position so that the business processing thread corresponding to the target storage position can process the event to be processed correspondingly;
the event to be processed carries service identification information of the service to be processed, and the distribution module is specifically configured to:
according to a first preset algorithm, calculating a hash value of the service identification information;
determining a first number of the preset plurality of storage locations;
according to a second preset algorithm, calculating based on the hash value and the first quantity to obtain a calculation result;
and determining a storage area number matched with the calculation result, and determining a storage area corresponding to the matched storage area number as a target storage area.
12. The apparatus of claim 11, wherein the distribution module is specifically configured to:
and distributing the event to be processed based on a plurality of preset executors to obtain a target executor for storing the event to be processed.
13. The apparatus of claim 11, wherein the distribution module is specifically configured to:
storing the event to be processed into a preset second queue, and performing event distribution processing based on a plurality of chain structures corresponding to the second queue to obtain a target chain structure for storing the event to be processed;
the storage module is specifically configured to obtain the event to be processed from the second queue; and storing the acquired event to be processed into the target chain structure so as to enable the business processing thread corresponding to the target chain structure to process the event to be processed correspondingly.
14. A service processing apparatus, applied to a client, comprising:
the acquisition module is used for acquiring service processing information of the service to be processed;
the generating module is used for generating at least one event to be processed of the service to be processed if the preset service processing condition is determined to be met according to the service processing information;
the sending module is used for sending a service processing request to a server according to the event to be processed, so that the server distributes the event to be processed based on a plurality of preset storage positions to obtain a target storage position for storing the event to be processed, and storing the event to be processed to the target storage position, so that a service processing thread corresponding to the target storage position carries out corresponding processing on the event to be processed;
the event to be processed carries service identification information of the service to be processed;
the distributing processing is performed on the event to be processed based on a plurality of preset storage positions to obtain a target storage position for storing the event to be processed, and the distributing processing comprises the following steps:
according to a first preset algorithm, calculating a hash value of the service identification information;
determining a first number of the preset plurality of storage locations;
according to a second preset algorithm, calculating based on the hash value and the first quantity to obtain a calculation result;
and determining a storage area number matched with the calculation result, and determining a storage area corresponding to the matched storage area number as a target storage area.
15. A service processing apparatus, comprising: a processor, and a memory arranged to store computer executable instructions; the computer executable instructions, when executed, cause the processor to perform the steps of the method of any of the preceding claims 1 to 8 or to perform the steps of the method of any of the preceding claims 9 to 10.
16. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the method of any of the preceding claims 1 to 8 or the steps of the method of any of the preceding claims 9 to 10.
CN202010839637.5A 2020-08-19 2020-08-19 Service processing method, device, equipment and storage medium Active CN112114976B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010839637.5A CN112114976B (en) 2020-08-19 2020-08-19 Service processing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010839637.5A CN112114976B (en) 2020-08-19 2020-08-19 Service processing method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112114976A CN112114976A (en) 2020-12-22
CN112114976B true CN112114976B (en) 2024-03-22

Family

ID=73804201

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010839637.5A Active CN112114976B (en) 2020-08-19 2020-08-19 Service processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112114976B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113034178A (en) * 2021-03-15 2021-06-25 深圳市麦谷科技有限公司 Multi-system integral calculation method and device, terminal equipment and storage medium
CN113760575A (en) * 2021-07-27 2021-12-07 广州虎牙信息科技有限公司 Event processing method and device, terminal and computer readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6999992B1 (en) * 2000-10-04 2006-02-14 Microsoft Corporation Efficiently sending event notifications over a computer network
CN108733496A (en) * 2017-04-24 2018-11-02 腾讯科技(上海)有限公司 Event-handling method and device
WO2019019853A1 (en) * 2017-07-27 2019-01-31 华为技术有限公司 Data processing method, terminal device, and network device
CN109684047A (en) * 2018-08-21 2019-04-26 平安普惠企业管理有限公司 Event-handling method, device, equipment and computer storage medium
CN110869968A (en) * 2017-03-17 2020-03-06 融文新闻国际控股有限公司 Event processing system

Also Published As

Publication number Publication date
CN112114976A (en) 2020-12-22

Similar Documents

Publication Publication Date Title
EP3886403B1 (en) Block chain service acceptance and consensus method and device
WO2019056640A1 (en) Order processing method and device
CN112114976B (en) Service processing method, device, equipment and storage medium
CN110768912A (en) API gateway current limiting method and device
CN111899008B (en) Resource transfer method, device, equipment and system
CN110928905B (en) Data processing method and device
CN107908680A (en) Management method, electronic device and the computer-readable recording medium of wechat public platform
CN106921712B (en) Service processing method and device
CN112488688B (en) Transaction processing method, device, equipment and storage medium based on blockchain
CN112256647B (en) File processing method and device
CN110764930B (en) Request or response processing method and device based on message mode
CN106657182B (en) Cloud file processing method and device
CN104144202A (en) Hadoop distributed file system access method, system and device
CN111461583B (en) Inventory checking method and device
CN112181378A (en) Method and device for realizing business process
CN113127225A (en) Method, device and system for scheduling data processing tasks
CN107295052B (en) Service processing method and device
CN109376020B (en) Data processing method, device and storage medium under multi-block chain interaction concurrence
US8510426B2 (en) Communication and coordination between web services in a cloud-based computing environment
CN111832862B (en) Flow management method and system based on block chain
CN110413427B (en) Subscription data pulling method, device, equipment and storage medium
CN112306695A (en) Data processing method and device, electronic equipment and computer storage medium
CN116886626A (en) Service data flow limiting method and device, computer equipment and storage medium
CN116775759A (en) Multi-platform big data synchronization method, equipment and medium based on message queue
CN114327818B (en) Algorithm scheduling method, device, equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant