CN115914405A - Service processing method and device - Google Patents

Service processing method and device Download PDF

Info

Publication number
CN115914405A
Authority
CN
China
Prior art keywords
service
node
target
rule
request
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211528181.6A
Other languages
Chinese (zh)
Inventor
苏巧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd filed Critical Alipay Hangzhou Information Technology Co Ltd
Priority to CN202211528181.6A
Publication of CN115914405A
Pending legal-status Critical Current

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The present specification provides a service processing method and device. The method includes: receiving a service processing request sent by an upstream node for any service event; in the case that the service processing request matches a temporary drainage rule for invoking a target service, sending a service invocation request for the target service at least to a target downstream node defined by the temporary drainage rule; receiving a first response message returned by the target downstream node, and, in the case that the first response message hits the temporary drainage rule, adding a drainage mark to a second response message returned to the upstream node, so that other service invocation requests for the target service subsequently generated by the service event are at least sent to the target downstream node; and, in the case that the service processing request does not match any temporary drainage rule but matches an original drainage rule for invoking the target service, sending a service invocation request for the target service to an original downstream node defined by the original drainage rule.

Description

Service processing method and device
Technical Field
The embodiments of this specification relate to the technical field of traffic management, and in particular to a service processing method and device.
Background
In the related art, service processing roughly proceeds as follows: a service initiator sends a service processing request to a service node for a service event, and while processing the received service processing request the service node may send service invocation requests to other service nodes in order to invoke their services.
However, if the invocation relations between service nodes need to be adjusted temporarily, the related art can only adjust each invocation relation individually, which is both cumbersome and inefficient.
Disclosure of Invention
The present specification aims to provide a service processing method and device.
According to a first aspect of one or more embodiments of the present specification, a service processing method is provided, applied to a target service node, where the target service node maintains an original drainage rule and a temporary drainage rule issued by a drainage controller, and the method includes:
receiving a service processing request sent by an upstream node for any service event;
in the case that the service processing request matches a temporary drainage rule for invoking a target service, sending a service invocation request for the target service at least to a target downstream node defined by the temporary drainage rule, to be processed by the target downstream node through the target service; receiving a first response message returned by the target downstream node, and, in the case that the first response message hits the temporary drainage rule, adding a drainage mark to a second response message returned to the upstream node, so that other service invocation requests for the target service subsequently generated by the service event are at least sent to the target downstream node;
and, in the case that the service processing request does not match any temporary drainage rule but matches an original drainage rule for invoking the target service, sending a service invocation request for the target service to an original downstream node defined by the original drainage rule, to be processed by the original downstream node through the target service.
According to a second aspect of one or more embodiments of the present specification, a service processing apparatus is provided, applied to a target service node, where the target service node maintains an original drainage rule and a temporary drainage rule issued by a drainage controller, and the apparatus includes:
a receiving unit, configured to receive a service processing request sent by an upstream node for any service event;
a first sending unit, configured to, in the case that the service processing request matches a temporary drainage rule for invoking a target service, send a service invocation request for the target service at least to a target downstream node defined by the temporary drainage rule, to be processed by the target downstream node through the target service; receive a first response message returned by the target downstream node, and, in the case that the first response message hits the temporary drainage rule, add a drainage mark to a second response message returned to the upstream node, so that other service invocation requests for the target service subsequently generated by the service event are at least sent to the target downstream node;
and a second sending unit, configured to, in the case that the service processing request does not match any temporary drainage rule but matches an original drainage rule for invoking the target service, send a service invocation request for the target service to an original downstream node defined by the original drainage rule, to be processed by the original downstream node through the target service.
According to a third aspect of one or more embodiments of the present specification, there is provided an electronic apparatus including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor implements the method of any of the first aspects by executing the executable instructions.
According to a fourth aspect of one or more embodiments of the present description, there is provided a computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the steps of the method according to any one of the first aspect.
In the embodiments of this specification, a service node processes service processing requests according to a locally maintained original drainage rule and a temporary drainage rule issued by a drainage controller. When a service processing request sent by an upstream node for any service event is received and the request matches the temporary drainage rule for invoking a target service, the service node redirects the service invocation request, which would otherwise be sent to the original downstream node defined by the original drainage rule, to the target downstream node defined by the temporary drainage rule, thereby achieving drainage from the original downstream node to the target downstream node. The service node also adds a drainage mark to the second response message returned to the upstream node, so that other service invocation requests for the target service subsequently generated by the service event are likewise sent to the target downstream node. Drainage is thus achieved in a multi-stage service scenario without modifying the routing rule of the service initiator, which reduces cost and saves network resources.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments in this specification, the drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some of the embodiments of this specification, and those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a system architecture diagram of a service processing method according to an exemplary embodiment.
Fig. 2 is a flowchart of a service processing method according to an exemplary embodiment.
Fig. 3a is a schematic diagram of a service initiator initiating a service processing request according to an exemplary embodiment.
Fig. 3b is a schematic diagram illustrating a service node initiating a service processing request according to an exemplary embodiment.
FIG. 4a is a schematic diagram illustrating drainage according to a calling rule according to an exemplary embodiment.
FIG. 4b is a schematic diagram of draining according to replication rules according to an exemplary embodiment.
Fig. 5a is a schematic diagram of a service processing method in a multi-phase service scenario according to an exemplary embodiment.
Fig. 5b is a schematic diagram of a service processing method in a multi-phase service scenario according to another exemplary embodiment.
FIG. 6 is a flowchart illustrating an exemplary embodiment of determining validity of a temporary drainage rule.
Fig. 7 is a schematic diagram of an apparatus according to an exemplary embodiment.
Fig. 8 is a block diagram of a service processing device according to an exemplary embodiment.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions in this specification, the technical solutions in the embodiments of this specification will be described clearly and completely below with reference to the drawings of the embodiments. Obviously, the described embodiments are only a part, rather than all, of the embodiments of this specification. All other embodiments obtained by those skilled in the art based on the embodiments in this specification without inventive effort shall fall within the scope of protection of this specification.
Service processing roughly proceeds as follows: a service initiator sends a service processing request to a service node for a service event, and while processing the received request the service node may send service invocation requests to other service nodes in order to invoke their services. For example, service node A receives a service processing request and needs to invoke the service of service node B while processing it; service node A therefore sends a service invocation request to service node B. After service node B finishes processing the received service invocation request, it returns a response message to service node A, and service node A continues processing the service processing request according to that response message.
Depending on the situation, the processing of a service may be divided into multiple stages. For example, service node A sends a first-stage service processing request to service node B for a certain service event; after processing it, service node B returns a response message. On receiving that response message, service node A starts the next stage and sends a second-stage service processing request to service node C, which processes the second-stage request.
In a particular case, both service node B and service node C need to invoke service node C1 while processing their service processing requests. If the invoked service node is now to be switched from C1 to C2, a temporary drainage request has to be sent for every stage of the service processing. Obviously, in a complex service environment some service processing may involve hundreds or even thousands of stages, and sending a request for every stage consumes a large amount of network resources.
In the prior art, when facing multi-stage service requests, the routing rule configured by the service initiator can be modified directly, so that every service node processes the service according to the modified routing rule. However, in different scenarios the service initiator may be a client or a server, so modifying the route also requires solving the adaptation problem of the service initiator, which raises the cost of modifying the routing rule.
To solve the above problems, this specification provides a service processing method, and fig. 1 is a system architecture diagram of a service processing method provided by an exemplary embodiment. As shown in fig. 1, the architecture includes a traffic controller 11, a service initiator 12 and service nodes 13 to 17. The traffic controller 11 can control the drainage behavior of the service initiator 12 and the service nodes 13 to 17 by issuing drainage rules. The service initiator 12 and the service nodes 13 to 17 may each be configured with a drainage component, which stores the drainage rules issued by the traffic controller 11 and determines, according to those rules, the destination node to which a request is sent.
The service initiator 12 may initiate a service event for a service requirement, and service node 13 sends a first-stage service processing request to service node 14 for the service event initiated by the service initiator 12. Service node 14 needs to invoke the service of service node 16 while processing the first-stage service processing request, and therefore sends a service invocation request to service node 16. Service node 16 processes the received service invocation request and returns a first response message. Service node 14 may then return a second response message to service node 13 according to the first response message returned by service node 16; service node 13 may send a second-stage service processing request to service node 15 according to the second response message returned by service node 14; and service node 15 processes the second-stage service processing request and also needs to invoke the service of service node 16 while doing so. In this service processing, the service initiator 12 may be service node 13 itself, or another service node different from service nodes 13 to 17, which will be described in detail later and is not repeated here.
Service node 16 and service node 17 provide the same service, but they may be in different environments, or they may be different versions of the same application. In a particular case, the traffic controller 11 may issue a temporary drainage rule to service node 14 to change the destination node of its service invocation requests, so that a service invocation request originally sent to service node 16 is sent to service node 17 instead.
Fig. 2 is a flowchart of a service processing method according to an exemplary embodiment. As shown in fig. 2, the method is applied to a target service node, where the target service node maintains an original drainage rule and a temporary drainage rule issued by a drainage controller, and the method includes at least the following steps:
step 201, receiving a service processing request sent by an upstream node for any service event.
The target service node may be any service node in the service link. The original drainage rule maintained by the target service node may be a routing rule configured on the service node; when no temporary drainage rule has been issued by the drainage controller, the service node processes all received service processing requests according to this routing rule. In contrast to the original drainage rule, which defines an original downstream node, the temporary drainage rule issued by the drainage controller defines a target downstream node.
A service event refers to a pending event initiated by a service initiator in response to a service requirement. For example, in a payment scenario the service event may be a payment event initiated by a payment client based on a payment requirement; in a query scenario the service event may be a query event initiated by a query system based on a query requirement. The service initiator refers to the role that initiates the service event in the service link, and depending on the scenario it may be a server, a client, or even the user himself. For example, in a payment scenario the service initiator may be the payment client or a user with a payment requirement; in a query scenario the service initiator may be a query server.
The upstream node of the target service node may be the service initiator of the service event, in which case the upstream node, as the service initiator, directly initiates the service processing request for the service event. As shown in fig. 3a, the service initiator 301 is the upstream node of service node 302, and service node 303 is the downstream node of service node 302. The service initiator 301 may initiate a service processing request directly for the service event and send it to service node 302. While processing the received service processing request, service node 302 sends a service invocation request to service node 303 to invoke the service of service node 303.
Alternatively, the upstream node of the target service node may be another service node different from the target service node, in which case the upstream node initiates the service processing request according to the service event initiated by the service initiator. As shown in fig. 3b, service node 304 is the service initiator, service node 305 is the upstream node of service node 306, and service node 307 is the downstream node of service node 306. Service node 304 initiates a service event in response to a service requirement. Service node 305 initiates a service processing request for the service event initiated by service node 304 and sends it to service node 306. While processing the received service processing request, service node 306 sends a service invocation request to service node 307 to invoke the service of service node 307.
The embodiments shown in fig. 3a and fig. 3b illustrate the service link structure both when the upstream node is the service initiator and when it is not, thereby enriching the diversity of the system structure.
Step 202, in the case that the service processing request matches a temporary drainage rule for invoking a target service, sending a service invocation request for the target service at least to a target downstream node defined by the temporary drainage rule, to be processed by the target downstream node through the target service; and receiving a first response message returned by the target downstream node, and, in the case that the first response message hits the temporary drainage rule, adding a drainage mark to a second response message returned to the upstream node, so that other service invocation requests for the target service subsequently generated by the service event are at least sent to the target downstream node.
Step 203, in the case that the service processing request does not match any temporary drainage rule but matches an original drainage rule for invoking the target service, sending a service invocation request for the target service to an original downstream node defined by the original drainage rule, so that the original downstream node processes it through the target service.
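For illustration only, the following is a minimal sketch of the branching in steps 201 to 203, assuming hypothetical names (ServiceProcessingSketch, FirstResponse, SecondResponse and the rule maps) that do not appear in this specification; it is a sketch under those assumptions, not the patented implementation.

```java
// Hypothetical end-to-end sketch of steps 201-203; all names below are assumptions.
import java.util.Map;
import java.util.function.Function;

public class ServiceProcessingSketch {
    record FirstResponse(String fromNode, String result) {}
    record SecondResponse(String result, boolean drainageMark) {}

    static SecondResponse process(String requestedService,
                                  Map<String, String> temporaryRules,   // service -> target downstream node
                                  Map<String, String> originalRules,    // service -> original downstream node
                                  Function<String, FirstResponse> invoke) {
        String target = temporaryRules.get(requestedService);
        if (target != null) {
            // Step 202: invoke the target service on the target downstream node.
            FirstResponse first = invoke.apply(target);
            // The first response hits the temporary rule when it was returned by the
            // target downstream node, so a drainage mark is added to the second response.
            boolean hit = first.fromNode().equals(target);
            return new SecondResponse(first.result(), hit);
        }
        String original = originalRules.get(requestedService);
        if (original == null) throw new IllegalStateException("no drainage rule for " + requestedService);
        // Step 203: no temporary drainage rule matched, use the original downstream node.
        FirstResponse first = invoke.apply(original);
        return new SecondResponse(first.result(), false);
    }

    public static void main(String[] args) {
        SecondResponse r = process("com.pay.service.PayService",
                Map.of("com.pay.service.PayService", "33.111.71.111"),
                Map.of("com.pay.service.PayService", "IP1"),
                node -> new FirstResponse(node, "ok from " + node));
        System.out.println(r); // drainageMark=true, so later requests of this event also go to the target node
    }
}
```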
As mentioned above, the original drainage rule maintained by the service node may be a routing rule configured on the service node. As shown in Table 1, the routing table contains three parts: service, service node and IP address. "Service 1" and "Service 2" are provided by "Service node A", whose IP address is "IP1" or "IP2"; "Service 3" is provided by "Service node B", whose IP address is "IP3". If the service processing request matches a service, the service node can determine the service node and IP address corresponding to that service from the routing rule, and thus determine the original downstream node.
Service Service node IP address
Service 1 Service node A IP1,IP2
Service 2 Service node A IP1,IP2
Service 3 Service node B IP3
TABLE 1
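As a rough illustration of how an original drainage rule such as Table 1 could be looked up, the following sketch assumes a hypothetical RoutingEntry record and lookup method; none of these names are defined by this specification.

```java
// Hypothetical sketch of resolving the original downstream node from a Table 1 style routing rule.
import java.util.List;
import java.util.Optional;

public class OriginalDrainageTable {
    record RoutingEntry(String service, String serviceNode, List<String> ipAddresses) {}

    private final List<RoutingEntry> entries = List.of(
            new RoutingEntry("Service 1", "Service node A", List.of("IP1", "IP2")),
            new RoutingEntry("Service 2", "Service node A", List.of("IP1", "IP2")),
            new RoutingEntry("Service 3", "Service node B", List.of("IP3")));

    // Resolve the original downstream node for the service named in the service processing request.
    public Optional<RoutingEntry> originalDownstream(String requestedService) {
        return entries.stream().filter(e -> e.service().equals(requestedService)).findFirst();
    }

    public static void main(String[] args) {
        OriginalDrainageTable table = new OriginalDrainageTable();
        System.out.println(table.originalDownstream("Service 3")); // Service node B / IP3
    }
}
```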
A temporary drainage rule refers to a rule issued by the traffic controller for temporarily controlling the service processing path. As shown in Table 2, a temporary drainage rule may contain three parts: name, service and IP. The name is the name of the temporary drainage rule, the service is the target service to which the rule applies, and the IP is the IP address of the target downstream node. The temporary drainage rule in Table 2 is named "Payment unit", the target service it invokes is "com.pay.service.PayService", and the IP address of the corresponding target downstream node is "33.111.71.111". Request parameters are used to match the service processing request with the temporary drainage rule: when the request parameters of the service processing request match the request parameters predefined in the temporary drainage rule, the service node sends the service invocation request to the target downstream node according to its address.
Name Payment unit
Service com.pay.service.PayService
IP 33.111.71.111
TABLE 2
In an embodiment, the method further comprises: parsing the request parameters of the service processing request; and determining that the temporary drainage rule matches the service processing request if the request parameters of the service processing request match the request parameters predefined in the temporary drainage rule. Taking Table 2 as an example, suppose the parsed request parameter of the service processing request is "com.pay.service.PayService". All temporary drainage rules are traversed to find the rule whose predefined request parameter matches, that is, the rule whose service is "com.pay.service.PayService", and that temporary drainage rule is determined to match the service processing request. Of course, if all temporary drainage rules have been traversed and none of them has the service "com.pay.service.PayService", then no temporary drainage rule matching the service processing request is maintained on the service node.
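A minimal sketch of the matching step described above, assuming hypothetical TemporaryDrainageRule and ServiceProcessingRequest types; the real rule format may carry more predefined request parameters than the single service name used here.

```java
// Hypothetical sketch of matching a service processing request against temporary drainage rules.
import java.util.List;
import java.util.Optional;

public class TemporaryRuleMatcher {
    record TemporaryDrainageRule(String name, String service, String targetIp) {}
    record ServiceProcessingRequest(String requestedService) {}

    // Traverse all temporary drainage rules and return the one whose predefined
    // request parameter (here: the target service) matches the parsed request parameter.
    static Optional<TemporaryDrainageRule> match(ServiceProcessingRequest request,
                                                 List<TemporaryDrainageRule> rules) {
        return rules.stream()
                .filter(rule -> rule.service().equals(request.requestedService()))
                .findFirst();
    }

    public static void main(String[] args) {
        List<TemporaryDrainageRule> rules = List.of(
                new TemporaryDrainageRule("Payment unit", "com.pay.service.PayService", "33.111.71.111"));
        ServiceProcessingRequest request = new ServiceProcessingRequest("com.pay.service.PayService");
        // A match means the service invocation request is sent to the target downstream node 33.111.71.111.
        System.out.println(match(request, rules));
    }
}
```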
In the case that the service processing request matches a temporary drainage rule for invoking a target service, a service invocation request for the target service is sent at least to the target downstream node defined by the temporary drainage rule, to be processed by the target downstream node through the target service. Note that the service invocation request may be sent to more nodes than just the target downstream node; the specific recipients of the service invocation request are determined by the attribute of the drainage rule.
In an embodiment, sending a service invocation request for the target service at least to the target downstream node defined by the temporary drainage rule includes: sending the service invocation request only to the target downstream node when the temporary drainage rule is a call rule; and sending service invocation requests to the target downstream node and the original downstream node respectively when the temporary drainage rule is a replication rule.
A call rule means that a request originally sent to one node is forwarded to another node instead, and the original node no longer processes the request. As shown in fig. 4a, service node 402 is the original downstream node of service node 401, and service node 403 is the target downstream node of service node 401. When there is no matching temporary drainage rule, service node 401 sends the service invocation request to service node 402. When there is a matching temporary drainage rule and it is a call rule, service node 401 sends the service invocation request only to service node 403, and service node 402 no longer receives it.
A replication rule means that a request originally sent to one node is sent to two nodes respectively, and the original node still processes the request. As shown in fig. 4b, when there is a matching temporary drainage rule and it is a replication rule, both service node 402 and service node 403 receive the service invocation request. Whether a temporary drainage rule is a call rule or a replication rule may be determined from a parameter value of the rule, or by sending a query to the traffic controller, which this specification does not limit. The embodiments of fig. 4a and fig. 4b thus provide two types of drainage rules and illustrate how the service invocation request is sent under the call rule and under the replication rule respectively.
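A small sketch of the difference between the two rule types, assuming a hypothetical RuleType enum and plain node names; it only reproduces the dispatch behavior of fig. 4a and fig. 4b.

```java
// Hypothetical sketch: a call rule redirects the service invocation request,
// a replication rule sends it to both downstream nodes.
import java.util.List;

public class DrainageDispatcher {
    enum RuleType { CALL, REPLICATION }

    // Returns the node addresses that should receive the service invocation request.
    static List<String> destinations(RuleType type, String originalNode, String targetNode) {
        return switch (type) {
            case CALL -> List.of(targetNode);                       // only the target downstream node
            case REPLICATION -> List.of(originalNode, targetNode);  // both downstream nodes
        };
    }

    public static void main(String[] args) {
        System.out.println(destinations(RuleType.CALL, "service-node-402", "service-node-403"));
        System.out.println(destinations(RuleType.REPLICATION, "service-node-402", "service-node-403"));
    }
}
```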
After receiving the service invocation request sent by the service node, the target downstream node processes it through the target service and, when processing is complete, returns a first response message to the service node. The service node receives the first response message returned by the target downstream node and, in the case that the first response message hits the temporary drainage rule, adds a drainage mark to the second response message returned to the upstream node, so that other service invocation requests for the target service subsequently generated by the service event are at least sent to the target downstream node. The temporary drainage rule contains the condition under which the service node adds the drainage mark, and the first response message hits the temporary drainage rule when it satisfies that condition. For example: if the condition defined by the temporary drainage rule for adding the drainage mark is that the response message is returned by the target downstream node, then the first response message satisfies the condition, and the service node adds the drainage mark to the second response message.
Magic number (2 bytes) | Marker bit (1 byte) | Status bit (1 byte) | Message ID (8 bytes) | Message length (4 bytes) | Message body / request parameters (x bytes)
TABLE 3
A response message has multiple regions. As shown in Table 3, the message contains six parts: the "magic number" is used to determine the protocol that the data packet follows; the "marker bit" indicates the serialization tool used for the message body data; the "status bit" records the status of the request or response; the "message ID" is the unique identifier of each message and is used to associate a request message with its response message; the "message length" records the length of the message body (i.e., determines the value of x); and the "message body" carries the request parameters. Of these six parts, the five other than the "message body" belong to the non-message-body region, and the message body belongs to the message-body region. The service node may add the drainage mark either in the message-body region or in the non-message-body region of the second response message. If the drainage mark is placed in the non-message-body region, it can be passed to other nodes transparently through the context. For example: the first four bits of the message ID of the first response message returned by the target downstream node are set to 0, and when the service node receives a response message whose first four bits of the message ID are 0, it also sets the first four bits of the message ID of the second response message to 0. If the drainage mark is placed in the message-body region, it can be passed to other nodes in the form of a request parameter. Of course, the ways of passing the drainage mark are not limited to these two, and this specification does not limit this.
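The following sketch illustrates the Table 3 layout and the example just given of carrying the drainage mark in the non-message-body region by zeroing the first four bits of the message ID. The class and method names, and the choice of the top four bits of the 8-byte message ID, are assumptions made for illustration.

```java
// Hypothetical sketch of the Table 3 message layout and a drainage mark in the non-message-body region.
import java.nio.ByteBuffer;

public class ResponseMessageCodec {
    // 2-byte magic number | 1-byte marker bit | 1-byte status bit |
    // 8-byte message ID   | 4-byte message length | x-byte message body (request parameters)
    static ByteBuffer encode(short magic, byte marker, byte status, long messageId, byte[] body) {
        ByteBuffer buf = ByteBuffer.allocate(2 + 1 + 1 + 8 + 4 + body.length);
        buf.putShort(magic).put(marker).put(status).putLong(messageId)
           .putInt(body.length).put(body);
        return buf.flip();
    }

    // Mark as in the example above: the first four bits of the 8-byte message ID are 0.
    static long addDrainageMark(long messageId) {
        return messageId & 0x0FFF_FFFF_FFFF_FFFFL; // clear the top four bits
    }

    static boolean carriesDrainageMark(long messageId) {
        return (messageId >>> 60) == 0;
    }

    public static void main(String[] args) {
        long firstResponseId = addDrainageMark(0xABCD_0000_0000_1234L);
        // The service node sees the mark on the first response message and sets the same
        // mark on the message ID of the second response message returned to the upstream node.
        long secondResponseId = addDrainageMark(0x9F00_0000_0000_5678L);
        ByteBuffer second = encode((short) 0xBABE, (byte) 1, (byte) 0, secondResponseId, "ok".getBytes());
        System.out.println(carriesDrainageMark(firstResponseId)); // true
        System.out.println(second.remaining());                   // 18: 16-byte header plus "ok"
    }
}
```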
The service node has three criteria for determining the destination node of a service invocation request. The highest priority is to determine the destination node according to the drainage mark: if the service processing request is determined to carry a drainage mark, the service invocation request is sent to the target downstream node corresponding to the mark. The next priority is to determine the destination node according to the temporary drainage rule: if the service processing request matches a temporary drainage rule for invoking the target service, the service invocation request is sent to the target downstream node defined by that rule. Finally, if the service processing request carries no drainage mark and no temporary drainage rule matches it, the service invocation request is sent to the original downstream node defined by the original drainage rule.
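A compact sketch of the three-level priority just described, with hypothetical names; the actual representations of the mark and the rules are not prescribed here.

```java
// Hypothetical sketch of destination resolution: drainage mark > temporary rule > original rule.
import java.util.Optional;

public class DestinationResolver {
    record Destination(String node, String reason) {}

    static Destination resolve(boolean requestCarriesDrainageMark,
                               Optional<String> targetNodeFromTemporaryRule,
                               String markedTargetNode,
                               String originalNode) {
        if (requestCarriesDrainageMark) {
            return new Destination(markedTargetNode, "drainage mark (highest priority)");
        }
        if (targetNodeFromTemporaryRule.isPresent()) {
            return new Destination(targetNodeFromTemporaryRule.get(), "matched temporary drainage rule");
        }
        return new Destination(originalNode, "original drainage rule");
    }

    public static void main(String[] args) {
        System.out.println(resolve(false, Optional.of("node-505"), "node-505", "node-504"));
        System.out.println(resolve(true, Optional.empty(), "node-505", "node-504"));
    }
}
```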
The way in which the service node determines the destination node for invoking a service is described in detail below with reference to fig. 5a and fig. 5b. As shown in fig. 5a, service node 501 is the upstream node of service node 502 and service node 503, service node 504 is the downstream node of service node 502 and service node 503, and service node 504 is the original downstream node corresponding to the original drainage rule maintained by service node 502. In response to a certain service event, service node 501 sends service processing request A to service node 502 in the first stage of service processing and, after receiving the response message returned by service node 502, sends service processing request B to service node 503 in the second stage. Both service node 502 and service node 503 need to send a service invocation request to either the original downstream node or the target downstream node, so that the downstream node processes it through the target service. In the case that service processing request A does not match any temporary drainage rule but matches the original drainage rule for invoking the target service, service node 502 and service node 503 send their service invocation requests to service node 504 (the original downstream node) in the first and second stages of service processing respectively.
As shown in fig. 5b, service node 505 is the downstream node of service node 502 and service node 503, and service node 505 is the target downstream node corresponding to the temporary drainage rule maintained by service node 502. In the case that service processing request A matches the temporary drainage rule for invoking the target service, service node 502 sends a service invocation request to service node 505 (the target downstream node), and service node 505 returns a first response message to service node 502 after processing the request. Service node 502 parses the first response message and determines that its sender is the target downstream node, which hits the condition defined by the temporary drainage rule for adding the drainage mark, so service node 502 adds a drainage mark to the second response message returned to the upstream node. After receiving the second response message, service node 501 starts the second stage of service processing and sends service processing request B, which carries the drainage mark, to service node 503. Service node 503 parses service processing request B, determines that it carries the drainage mark, and therefore sends the service invocation request that would otherwise go to service node 504 (the original downstream node) to service node 505 (the target downstream node) instead. In this embodiment, the service node adds the drainage mark to the second response message returned to the upstream node, so that other service invocation requests for the target service subsequently generated by the service event are also sent to the target downstream node. Drainage is thus achieved in a multi-stage service scenario without modifying the routing rule of the service initiator, which reduces cost and saves network resources.
Besides serving as the basis for deciding whether to add the drainage mark, the first response message can also be used to feed back the result of the target downstream node's processing of the service invocation request. In an embodiment, the method further comprises: recording, according to the first response message, the result of the target downstream node's processing of the service invocation request; and analyzing the processing result to test the performance of the target downstream node.
Further, in the case that the temporary drainage rule is a replication rule, the target downstream node's processing result for the service invocation request may be compared with the original downstream node's processing result for the same request: if the two results are consistent, the performance of the target downstream node is qualified; if they are inconsistent, it is not. This embodiment tests the performance of the target downstream node by recording and comparing the downstream nodes' processing results for the service invocation request, thereby providing data support for assessing the performance of the target downstream node.
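A minimal sketch of the comparison described above for a replication rule, assuming a hypothetical ProcessingResult record; what counts as a "consistent" result in practice would depend on the service.

```java
// Hypothetical sketch: compare the two downstream results for the same service invocation request.
import java.util.Objects;

public class ReplicationResultComparator {
    record ProcessingResult(String requestId, String payload) {}

    // "Qualified" here simply means the target downstream node returned the same
    // result as the original downstream node for the same request.
    static boolean targetNodeQualified(ProcessingResult original, ProcessingResult target) {
        return Objects.equals(original.requestId(), target.requestId())
                && Objects.equals(original.payload(), target.payload());
    }

    public static void main(String[] args) {
        ProcessingResult fromOriginal = new ProcessingResult("req-1", "balance=100");
        ProcessingResult fromTarget = new ProcessingResult("req-1", "balance=100");
        System.out.println(targetNodeQualified(fromOriginal, fromTarget)); // true
    }
}
```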
From the foregoing description of the temporary drainage rule, it can be seen that the temporary drainage rule may define the request parameters corresponding to the target service and the target downstream node corresponding to the target service, and may further define the condition for adding the drainage mark (the sender of the response message is the target downstream node). In addition, the temporary drainage rule may contain control parameters and a valid condition, by which the service node can determine whether the temporary drainage rule is valid.
In an embodiment, the temporary drainage rule comprises control parameters and a valid condition, and the method further comprises: updating the values of the control parameters in the temporary drainage rule according to the target downstream node's processing of the received service invocation requests; determining that the temporary drainage rule is valid when the control parameters satisfy the valid condition; and determining that the temporary drainage rule is invalid when the control parameters do not satisfy the valid condition.
Name Payment unit
Service com.pay.service.PayService
IP 33.111.71.111
Run time 9:00-10:00
Number of runs 0
Valid condition The run time is 9:00-10:00 and the number of runs is less than 3
TABLE 4
The control parameters correspond to the valid condition. As shown in Table 4, the control parameters may record the run time and the number of runs of the temporary drainage rule, and the valid condition is "the run time is 9:00-10:00 and the number of runs is less than 3", meaning that the temporary drainage rule is valid only within the period 9:00-10:00 and while it has been run fewer than 3 times. Of course, the control parameters are not limited to these two, and this specification does not limit this.
In the case that a temporary drainage rule matching the service processing request exists, it is necessary to determine whether the temporary drainage rule is valid, as shown in fig. 6. In step 601, the service processing request is received; in step 602, it is determined that the service processing request matches the temporary drainage rule for invoking the target service; the service node then executes step 603 to determine the control parameters and valid condition of the temporary drainage rule.
If the control parameters satisfy the valid condition, step 605 is entered and the service invocation request is sent to the target downstream node. In step 606, the response message is received and the control parameters are updated.
If the control parameters do not satisfy the valid condition, step 607 is entered and the service invocation request is sent to the original downstream node.
In one embodiment, a new temporary drainage rule is issued by the traffic controller, and its valid condition is: the number of drainage requests is less than 5 and the drainage duration is 9:00-10:00. At 9:30, a service processing request matching the temporary drainage rule is received. Since 9:30 falls within the drainage duration 9:00-10:00 and the number of drainage requests is 0 at this moment, the control parameters satisfy the valid condition, and the service invocation request is sent to the target downstream node. A first response message returned by the target downstream node is then received; since the processing result fed back in the message is "processing complete", the control parameters of the temporary drainage rule are updated, and the number of drainage requests is updated from 0 to 1. If the service node goes on to process further matching service processing requests within the next 30 minutes, the temporary drainage rule is determined to be invalid once the upper limit on the number of drainage requests is reached; and after the next 30 minutes have elapsed, the temporary drainage rule is determined to be invalid regardless of whether the number of drainage requests has reached the upper limit. By configuring control parameters and a valid condition in the temporary drainage rule, this embodiment allows the user to place custom limits on drainage and, to a certain extent, avoids overloading the target downstream node.
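A sketch of the control parameters and valid condition of Table 4, assuming hypothetical class and method names; it only reproduces the time-window and run-count check described in this example.

```java
// Hypothetical sketch of the Table 4 validity check: the rule is valid only inside its
// run-time window and while the run count is below the limit; the count is updated
// after each successful drainage.
import java.time.LocalTime;

public class TemporaryRuleValidity {
    private final LocalTime windowStart;
    private final LocalTime windowEnd;
    private final int maxRuns;
    private int runs; // control parameter updated from the first response message

    TemporaryRuleValidity(LocalTime windowStart, LocalTime windowEnd, int maxRuns) {
        this.windowStart = windowStart;
        this.windowEnd = windowEnd;
        this.maxRuns = maxRuns;
    }

    boolean isValid(LocalTime now) {
        return !now.isBefore(windowStart) && !now.isAfter(windowEnd) && runs < maxRuns;
    }

    void recordCompletedDrainage() {
        runs++; // e.g. updated from 0 to 1 after the first response message reports completion
    }

    public static void main(String[] args) {
        TemporaryRuleValidity rule =
                new TemporaryRuleValidity(LocalTime.of(9, 0), LocalTime.of(10, 0), 3);
        System.out.println(rule.isValid(LocalTime.of(9, 30))); // true -> drain to the target node
        rule.recordCompletedDrainage();
        rule.recordCompletedDrainage();
        rule.recordCompletedDrainage();
        System.out.println(rule.isValid(LocalTime.of(9, 45))); // false -> fall back to the original node
    }
}
```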
As described above, the original drainage rule defines an original downstream node, the temporary drainage rule defines a target downstream node, and the original downstream node and the target downstream node may be in different environments or may be different versions of the same application in the same environment. In an embodiment, the target downstream node and the original downstream node differ in at least one of: environment, running version.
In the case that the target downstream node is in a test environment and the original downstream node is in the original environment, the service node can record the processing results of the original downstream node and the target downstream node and compare them, thereby testing the performance of the target downstream node. In the case that the target downstream node runs the latest version and the original downstream node runs the current version, the traffic controller issues the temporary drainage rule corresponding to the target downstream node to the service node, so that trial operation can be performed on the target downstream node.
Of course, the difference between the target downstream node and the original downstream node is not limited to these two cases, and this specification does not limit this.
In a query scenario, a user has a query requirement and the query system needs to query two balances, a payment balance and a balance treasure balance, so the query service is divided into two stages: the first stage queries the payment balance and the second stage queries the balance treasure balance. At this time a new query server needs to be tested; the query server to be tested provides the same service as the original query server but is in a different environment: the original query server is in the original environment of the query scenario, while the test query server is in a test environment.
A service processing method in the query scenario is described below with reference to fig. 1. In fig. 1, the service initiator 12 is the user, service node 13 is the balance query client, service node 14 is the payment balance query service, service node 15 is the balance treasure query service, service node 16 is the original query server, and service node 17 is the test query server.
The user clicks the query button on the balance query client, generating a query event, and service node 13 (the balance query client) sends a payment balance query request to service node 14 (the payment balance query service) for the query event. Service node 14 needs to invoke the service of a query server while processing the payment balance query request. Originally, without the test query server, service node 14 would only send the payment balance query service invocation request to service node 16 (the original query server). However, to test the test query server, the traffic controller issues a temporary drainage rule to service node 14, so that the payment balance query service invocation request is sent to both service node 16 (the original query server) and service node 17 (the test query server). After receiving the payment balance query service invocation request, service node 17 queries the payment balance using its own query function and returns a first response message to service node 14. Service node 14 receives two response messages: one is the response message returned by service node 16, which does not hit the temporary drainage rule, so no drainage mark is added for it; the other is the first response message returned by service node 17, which hits the temporary drainage rule, so service node 14 adds a drainage mark to the second response message returned to service node 13. In response to the received second response message, service node 13 (the balance query client) sends a balance treasure query request to service node 15 (the balance treasure query service); since this request carries the drainage mark, service node 15 sends the balance treasure query service invocation request to service node 17, and service node 17 queries the balance treasure balance using its own query function and feeds the result back to service node 13 (the balance query client). Service node 13 finally feeds back the payment balance query result and the balance treasure balance query result to the user.
In the above process, service node 14 (the payment balance query service) may record, according to the response messages returned by service node 16 and service node 17, their processing results for the payment balance query service invocation request and compare the two. If the two results are consistent, the query function of the test query server is qualified; if they are inconsistent, it is not. By issuing the temporary drainage rule to the service node, this embodiment achieves drainage in a multi-stage service scenario and tests the performance of the device under test.
Fig. 7 is a schematic block diagram of an apparatus provided in an exemplary embodiment. Referring to fig. 7, at the hardware level the apparatus includes a processor 702, an internal bus 704, a network interface 706, a memory 708 and a non-volatile storage 710, and may of course also include hardware required for other functions. One or more embodiments of this specification may be implemented in software, for example by the processor 702 reading the corresponding computer program from the non-volatile storage 710 into the memory 708 and then running it. Of course, in addition to a software implementation, the one or more embodiments of this specification do not exclude other implementations, such as logic devices or combinations of software and hardware; that is, the execution subject of the following processing flow is not limited to logic units and may also be hardware or logic devices.
Fig. 8 is a block diagram of a service processing apparatus according to an exemplary embodiment. The apparatus may be applied to the device shown in fig. 7 to implement the technical solution of this specification. The apparatus is applied to a target service node, the target service node maintains an original drainage rule and a temporary drainage rule issued by a drainage controller, and the apparatus comprises:
a receiving unit 801, configured to receive a service processing request sent by an upstream node for any service event;
a first sending unit 802, configured to, in the case that the service processing request matches a temporary drainage rule for invoking a target service, send a service invocation request for the target service at least to a target downstream node defined by the temporary drainage rule, to be processed by the target downstream node through the target service; receive a first response message returned by the target downstream node, and, in the case that the first response message hits the temporary drainage rule, add a drainage mark to a second response message returned to the upstream node, so that other service invocation requests for the target service subsequently generated by the service event are at least sent to the target downstream node;
a second sending unit 803, configured to, in the case that the service processing request does not match any temporary drainage rule but matches an original drainage rule for invoking the target service, send a service invocation request for the target service to an original downstream node defined by the original drainage rule, to be processed by the original downstream node through the target service.
Optionally, the apparatus further includes:
a parsing unit 804, configured to parse the request parameters of the service processing request;
a first determining unit 805, configured to determine that the temporary drainage rule matches the service processing request if the request parameter of the service processing request matches a request parameter predefined in the temporary drainage rule.
Optionally, the first sending unit 802 is specifically configured to:
send the service invocation request only to the target downstream node when the temporary drainage rule is a call rule;
and send service invocation requests to the target downstream node and the original downstream node respectively when the temporary drainage rule is a replication rule.
Optionally, the apparatus further includes:
a recording unit 806, configured to record, according to the first response message, the processing result of the target downstream node for the service invocation request;
an analyzing unit 807, configured to analyze the processing result to test the performance of the target downstream node.
Optionally, the upstream node of the target service node is the service initiator of the service event, and the service processing request is initiated by the service initiator for the service event;
or, the upstream node of the target service node is another service node different from the target service node, and the service processing request is initiated by the other service node according to the service event initiated by a service initiator.
Optionally, the temporary drainage rule includes control parameters and a valid condition, and the apparatus further includes:
an updating unit 808, configured to update the values of the control parameters in the temporary drainage rule according to the target downstream node's processing of the received service invocation requests;
a second determining unit 809, configured to determine that the temporary drainage rule is valid if the control parameters satisfy the valid condition;
a third determining unit 810, configured to determine that the temporary drainage rule is invalid if the control parameters do not satisfy the valid condition.
Optionally, the target downstream node and the original downstream node differ in at least one of: environment, running version.
In the 1990s, an improvement to a technology could be clearly distinguished as an improvement in hardware (for example, an improvement to a circuit structure such as a diode, a transistor or a switch) or an improvement in software (an improvement to a method flow). However, as technology develops, many improvements to method flows today can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement to a method flow cannot be realized with hardware entity modules. For example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. A designer "integrates" a digital system onto a single PLD by programming it himself, without asking a chip manufacturer to design and fabricate a dedicated integrated circuit chip. Moreover, nowadays, instead of manually fabricating integrated circuit chips, this programming is mostly implemented with "logic compiler" software, which is similar to the software compilers used in program development, and the original code to be compiled must be written in a specific programming language called a Hardware Description Language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM and RHDL (Ruby Hardware Description Language), among which VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are the most commonly used at present. It will also be apparent to those skilled in the art that a hardware circuit implementing a logical method flow can easily be obtained merely by slightly logically programming the method flow into an integrated circuit using the above hardware description languages.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (such as software or firmware) executable by the (micro)processor, logic gates, switches, an Application Specific Integrated Circuit (ASIC), a programmable logic controller or an embedded microcontroller; examples of such controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20 and Silicone Labs C8051F320. A memory controller may also be implemented as part of the control logic of the memory. Those skilled in the art also know that, in addition to implementing the controller purely as computer-readable program code, the method steps can be logically programmed so that the controller achieves the same functions in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Such a controller may therefore be regarded as a hardware component, and the means included in it for implementing various functions may also be regarded as structures within the hardware component. Or, the means for implementing various functions may even be regarded both as software modules implementing the method and as structures within the hardware component.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a server system. Of course, the present invention does not exclude that as future computer technology develops, the computer implementing the functionality of the above described embodiments may be, for example, a personal computer, a laptop computer, a vehicle-mounted human-computer interaction device, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device or a combination of any of these devices.
Although one or more embodiments of this specification provide the method operation steps described in the embodiments or flowcharts, more or fewer operation steps may be included based on conventional or non-inventive means. The order of the steps listed in the embodiments is only one of many possible execution orders and does not represent the only execution order. When an actual device or end product executes, the steps may be executed sequentially or in parallel according to the methods shown in the embodiments or drawings (for example, in parallel processor or multi-threaded processing environments, or even in distributed data processing environments). The terms "comprise", "include" or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or device comprising a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article or device. Without further limitation, the presence of additional identical or equivalent elements in a process, method, article or device comprising the listed elements is not excluded. If terms such as first and second are used, they are only used to distinguish names and do not denote any particular order.
For convenience of description, the above devices are described as being divided into various modules by functions, which are described separately. Of course, when implementing one or more of the present description, the functions of each module may be implemented in one or more software and/or hardware, or a module implementing the same function may be implemented by a combination of multiple sub-modules or sub-units, etc. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM), in a computer-readable medium. Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage, graphene storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media, such as modulated data signals and carrier waves.
As will be appreciated by one skilled in the art, one or more embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, one or more embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, one or more embodiments of the present description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
One or more embodiments of the present description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. One or more embodiments of the specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner; for the same or similar parts among the embodiments, reference may be made to one another, and each embodiment focuses on its differences from the other embodiments. In particular, the system embodiment is described relatively simply because it is substantially similar to the method embodiment, and reference may be made to the corresponding description of the method embodiment for relevant points. In the description of the specification, reference to terms such as "one embodiment," "some embodiments," "an example," "a specific example," or "some examples" means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the specification. In this specification, schematic uses of these terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Moreover, those skilled in the art may combine different embodiments or examples, and features of different embodiments or examples, described in this specification, provided they do not contradict one another.
The above description is merely illustrative of one or more embodiments of the present specification and is not intended to limit their scope. Various modifications and alterations to one or more embodiments described herein will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the present specification shall fall within the scope of the claims.

Claims (10)

1. A service processing method, applied to a target service node, wherein the target service node maintains an original drainage rule and a temporary drainage rule issued by a drainage controller, the method comprising:
receiving a service processing request sent by an upstream node for any service event;
in a case that the service processing request matches a temporary drainage rule for invoking a target service, sending a service invocation request for the target service at least to a target downstream node defined by the temporary drainage rule, so that the target downstream node performs processing through the target service; receiving a first response message returned by the target downstream node, and, in a case that the first response message hits the temporary drainage rule, adding a drainage mark to a second response message returned to the upstream node, so that other service invocation requests for the target service subsequently generated by the any service event are sent at least to the target downstream node;
and in a case that the service processing request does not match any temporary drainage rule but matches an original drainage rule for invoking the target service, sending a service invocation request for the target service to an original downstream node defined by the original drainage rule, so that the original downstream node performs processing through the target service.
2. The method of claim 1, further comprising:
parsing request parameters of the service processing request;
determining that the temporary drainage rule matches the service processing request in a case that the request parameters of the service processing request match predefined request parameters in the temporary drainage rule.
3. The method of claim 1, wherein the sending a service invocation request for the target service at least to a target downstream node defined by the temporary drainage rule comprises:
sending a service invocation request only to the target downstream node in a case that the temporary drainage rule is a calling rule;
and sending service invocation requests to the target downstream node and the original downstream node, respectively, in a case that the temporary drainage rule is a replication rule.
4. The method of claim 1, further comprising:
recording, according to the first response message, a processing result of the service invocation request by the target downstream node;
and analyzing the processing result to test the performance of the target downstream node.
5. The method of claim 1, wherein:
the upstream node of the target service node is a service initiator of the any service event, and the service processing request is initiated by the service initiator for the any service event;
or the upstream node of the target service node is another service node different from the target service node, and the service processing request is initiated by the other service node according to the any service event initiated by a service initiator.
6. The method of claim 1, wherein the temporary drainage rule comprises a control parameter and an effective condition, and the method further comprises:
updating a value of the control parameter in the temporary drainage rule according to the processing, by the target downstream node, of the received service invocation requests;
determining that the temporary drainage rule is valid in a case that the control parameter meets the effective condition;
and determining that the temporary drainage rule is invalid in a case that the control parameter does not meet the effective condition.
7. The method of claim 1, wherein the target downstream node and the original downstream node differ in at least one of: environment and running version.
8. A service processing device, applied to a target service node, wherein the target service node maintains an original drainage rule and a temporary drainage rule issued by a drainage controller, the device comprising:
a receiving unit, configured to receive a service processing request sent by an upstream node for any service event;
a first sending unit, configured to: in a case that the service processing request matches a temporary drainage rule for invoking a target service, send a service invocation request for the target service at least to a target downstream node defined by the temporary drainage rule, so that the target downstream node performs processing through the target service; receive a first response message returned by the target downstream node; and, in a case that the first response message hits the temporary drainage rule, add a drainage mark to a second response message returned to the upstream node, so that other service invocation requests for the target service subsequently generated by the any service event are sent at least to the target downstream node;
and a second sending unit, configured to, in a case that the service processing request does not match any temporary drainage rule but matches an original drainage rule for invoking the target service, send a service invocation request for the target service to an original downstream node defined by the original drainage rule, so that the original downstream node performs processing through the target service.
9. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor implements the method of any one of claims 1-7 by executing the executable instructions.
10. A computer readable storage medium having stored thereon computer instructions which, when executed by a processor, carry out the steps of the method according to any one of claims 1 to 7.
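
The routing behavior recited in claims 1 to 3 can be summarized as: match the incoming service processing request against the temporary drainage rules; if a rule matches, invoke the target downstream node (and, for a replication rule, also the original downstream node) and, when the first response hits the rule, stamp a drainage mark on the response returned to the upstream node; otherwise fall back to the original drainage rule. The following is a minimal illustrative sketch of that flow and is not part of the claims; all identifiers (TemporaryRule, OriginalRule, RuleKind, route_call, send_call, and the response fields payload, hit_temporary_rule, drainage_mark) are hypothetical names chosen for this example, and the sketch assumes an original drainage rule always exists for the requested service.

from dataclasses import dataclass
from enum import Enum
from typing import Dict, List


class RuleKind(Enum):
    CALL = "call"            # calling rule: route only to the target downstream node
    REPLICATE = "replicate"  # replication rule: also send a copy to the original node


@dataclass
class TemporaryRule:
    service: str                     # target service the rule applies to
    expected_params: Dict[str, str]  # predefined request parameters to match (claim 2)
    target_node: str                 # target downstream node defined by the rule
    kind: RuleKind


@dataclass
class OriginalRule:
    service: str
    original_node: str


def send_call(node: str, service: str, params: Dict[str, str]) -> Dict[str, object]:
    # Placeholder transport: a real service node would issue an RPC or HTTP call here.
    return {"payload": f"{service} handled by {node}", "hit_temporary_rule": True}


def matches(rule: TemporaryRule, params: Dict[str, str]) -> bool:
    # Claim 2: the temporary rule matches when every predefined parameter is
    # present in the parsed request parameters with the expected value.
    return all(params.get(k) == v for k, v in rule.expected_params.items())


def route_call(service: str, params: Dict[str, str],
               temp_rules: List[TemporaryRule],
               orig_rules: List[OriginalRule]) -> Dict[str, object]:
    # Assumes an original drainage rule always exists for the requested service.
    orig = next(r for r in orig_rules if r.service == service)
    for rule in temp_rules:
        if rule.service == service and matches(rule, params):
            first_resp = send_call(rule.target_node, service, params)
            if rule.kind is RuleKind.REPLICATE:
                # Claim 3: a replication rule also keeps the original node in the loop.
                send_call(orig.original_node, service, params)
            second_resp: Dict[str, object] = {"payload": first_resp["payload"]}
            if first_resp.get("hit_temporary_rule"):
                # Claim 1: mark the response to the upstream node so that later
                # invocations of the same service event stick to the target node.
                second_resp["drainage_mark"] = rule.target_node
            return second_resp
    # No temporary rule matched: fall back to the original drainage rule.
    return {"payload": send_call(orig.original_node, service, params)["payload"]}

For instance, route_call("inventory", {"uid": "42"}, [TemporaryRule("inventory", {"uid": "42"}, "gray-node", RuleKind.CALL)], [OriginalRule("inventory", "prod-node")]) would return a response whose drainage_mark points at gray-node, so subsequent invocations for the same service event can be kept on the gray node.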
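
Claim 6 ties the lifetime of a temporary drainage rule to a control parameter and an effective condition. A minimal sketch of one possible reading follows, assuming the control parameter is a counter of already-drained invocations and the effective condition is an upper bound on that counter; the names ControlledRule, record_call, max_calls, and is_valid are hypothetical and not taken from the specification.

from dataclasses import dataclass


@dataclass
class ControlledRule:
    drained_calls: int = 0   # control parameter, updated as drained invocations are processed
    max_calls: int = 100     # effective condition: upper bound on drained invocations

    def record_call(self) -> None:
        # Update the value of the control parameter according to the processing of
        # received service invocation requests by the target downstream node.
        self.drained_calls += 1

    def is_valid(self) -> bool:
        # The rule is valid while the control parameter meets the effective condition,
        # and invalid once it no longer does.
        return self.drained_calls < self.max_calls

Once is_valid() returns False, the node would stop applying the temporary rule and route subsequent invocations according to the original drainage rule alone.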
CN202211528181.6A 2022-11-30 2022-11-30 Service processing method and device Pending CN115914405A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211528181.6A CN115914405A (en) 2022-11-30 2022-11-30 Service processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211528181.6A CN115914405A (en) 2022-11-30 2022-11-30 Service processing method and device

Publications (1)

Publication Number Publication Date
CN115914405A true CN115914405A (en) 2023-04-04

Family

ID=86485232

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211528181.6A Pending CN115914405A (en) 2022-11-30 2022-11-30 Service processing method and device

Country Status (1)

Country Link
CN (1) CN115914405A (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120089677A1 (en) * 2010-10-08 2012-04-12 Traffix Systems Method and system for providing network services
CN108628947A (en) * 2018-04-02 2018-10-09 阿里巴巴集团控股有限公司 A kind of business rule matched processing method, device and processing equipment
US20190222526A1 (en) * 2018-01-14 2019-07-18 International Business Machines Corporation Adaptable circuit breaker chain for microservices
WO2019223390A1 (en) * 2018-05-21 2019-11-28 阿里巴巴集团控股有限公司 Authorization guidance data processing method, apparatus, device and system
US20200412644A1 (en) * 2019-06-28 2020-12-31 Beijing Baidu Netcom Science And Technology Co., Ltd. Content based routing method and apparatus
CN112506648A (en) * 2020-11-20 2021-03-16 鹏城实验室 Traffic stateless migration method of virtual network function instance and electronic equipment
WO2021098407A1 (en) * 2019-11-21 2021-05-27 中移物联网有限公司 Mec-based service node allocation method and apparatus, and related server
CN112860744A (en) * 2021-01-29 2021-05-28 北京电解智科技有限公司 Business process processing method and device
CN113240259A (en) * 2021-04-30 2021-08-10 顶象科技有限公司 Method and system for generating rule policy group and electronic equipment
CN113612686A (en) * 2021-06-29 2021-11-05 中国人民财产保险股份有限公司 Traffic scheduling method and device and electronic equipment
CN115185613A (en) * 2022-08-15 2022-10-14 康键信息技术(深圳)有限公司 Business rule configuration method, system, device and medium based on rule engine

Similar Documents

Publication Publication Date Title
CN107450979B (en) Block chain consensus method and device
CN107579951B (en) Service data processing method, service processing method and equipment
CN108418851B (en) Policy issuing system, method, device and equipment
CN111767143A (en) Transaction data processing method, device, equipment and system
EP3869434A1 (en) Blockchain-based data processing method and apparatus, device, and medium
CN113495797B (en) Message queue and consumer dynamic creation method and system
CN111767144A (en) Transaction routing determination method, device, equipment and system for transaction data
CN108733457A (en) The implementation method and device of distributed transaction
CN111225018A (en) Request message processing method and device and electronic equipment
CN108399175B (en) Data storage and query method and device
CN113079224A (en) Account binding method and device, storage medium and electronic equipment
CN111694992A (en) Data processing method and device
CN110851207B (en) State transition management method and device, electronic equipment and storage medium
CN112751935A (en) Request processing method and device, electronic equipment and storage medium
CN115914405A (en) Service processing method and device
CN115904785A (en) Abnormity positioning method, device, equipment and readable storage medium
CN114625410A (en) Request message processing method, device and equipment
CN115033350A (en) Execution method and device of distributed transaction
CN113626295B (en) Method and system for processing pressure measurement data and computer readable storage medium
CN111931797B (en) Method, device and equipment for identifying network to which service belongs
CN114911750A (en) Data conversion method and device
CN111339117A (en) Data processing method, device and equipment
CN115865839B (en) ACL management method, ACL management device, communication equipment and storage medium
CN112907198B (en) Service state circulation maintenance method and device and electronic equipment
TWI844091B (en) Feature matching rule construction, feature matching method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination