CN115967678A - Flow limiting method and device, computer equipment and storage medium

Info

Publication number
CN115967678A
Authority
CN
China
Prior art keywords: node, target, interface, current limiting, information
Prior art date
Legal status
Pending
Application number
CN202211704539.6A
Other languages
Chinese (zh)
Inventor
张仁辉
郑海青
Current Assignee
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd ICBC
Priority to CN202211704539.6A
Publication of CN115967678A
Legal status: Pending

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00: Reducing energy consumption in communication networks
    • Y02D 30/50: Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The application relates to a flow limiting method, a flow limiting device, computer equipment, a storage medium and a computer program product, applied in the technical field of cloud computing. The method comprises the following steps: when a traffic access request is detected, confirming the target node and target interface corresponding to the traffic access request; if no current limiting information for the target interface exists in the current limiting information cache of the target node, generating current limiting information for the target interface and adding it to the current limiting information cache of the target node, the current limiting information being used by the target interface to perform flow limiting processing on traffic access requests that meet a threshold condition; and synchronizing the current limiting information of the target interface to the current limiting information caches of the associated nodes of the target node, an associated node being any node, other than the target node, in the node cluster corresponding to the target node that also includes the target interface. By adopting the method, the high availability of the system can be improved.

Description

Flow limiting method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of cloud computing technologies, and in particular, to a method and an apparatus for limiting a flow, a computer device, a storage medium, and a computer program product.
Background
With the popularity of the internet, ever-increasing traffic places higher demands on the high availability of the systems that provide services, and traffic limiting technology has emerged in response.
In the conventional technology, traffic limiting mainly consists of configuring a flow limiting policy for each service interface that is expected to need it, so that when the traffic rises steeply or exceeds a threshold, the flow is limited according to the policy and the pressure on the system is reduced.
In practice, however, surges and drops in traffic are often unpredictable. When a large number of traffic access requests access a service interface for which no current limiting policy has been configured, the pressure on the system receiving the traffic access requests may increase sharply, resulting in system downtime or service unavailability, so the high availability of the system is low.
Disclosure of Invention
In view of the above, it is necessary to provide a traffic limiting method, apparatus, computer device, computer-readable storage medium and computer program product capable of improving the high availability of the system.
In a first aspect, the present application provides a flow limiting method applied to a pre-processor (also referred to as a front-end processor). The method comprises the following steps:
under the condition that a flow access request is detected, confirming a target node and a target interface corresponding to the flow access request;
generating the current limiting information of the target interface under the condition that the current limiting information of the target interface does not exist in the current limiting information cache of the target node, and adding the current limiting information of the target interface into the current limiting information cache of the target node; the flow limiting information of the target interface is used for the target interface to carry out flow limiting processing on the flow access request meeting the threshold condition;
synchronizing the current limiting information of the target interface to the current limiting information cache of each associated node of the target node; an associated node is any node, other than the target node, in the node cluster corresponding to the target node that also includes the target interface.
In one embodiment, the confirming the target node and the target interface corresponding to the traffic access request in the case that the traffic access request is detected includes:
under the condition that a flow access request is detected, identifying the request type of the flow access request and interface parameter information corresponding to the flow access request;
confirming a node cluster corresponding to the flow access request according to the request type;
and identifying a node comprising an interface matched with the interface parameter information from the node cluster as the target node, and taking an interface matched with the interface parameter information in the target node as the target interface.
In one embodiment, generating the current limiting information of the target interface in the case that the current limiting information of the target interface does not exist in the current limiting information cache of the target node includes:
Acquiring historical access information of the target interface under the condition that current limiting information corresponding to interface parameter information of the target interface does not exist in current limiting information cache of the target node;
predicting a target concurrency threshold and a target throughput threshold of the target interface according to historical access information of the target interface;
and generating the current limiting information of the target interface according to the target concurrency threshold and the target throughput threshold of the target interface.
In one embodiment, before predicting a target concurrency threshold and a target throughput threshold of the target interface according to historical access information of the target interface, the method further includes:
acquiring historical access information of each interface under a plurality of node clusters;
according to historical access information of each interface under the plurality of node clusters, a concurrency threshold prediction model and a throughput threshold prediction model of the plurality of node clusters are built;
the predicting a target concurrency number threshold and a target throughput threshold of the target interface according to the historical access information of the target interface comprises the following steps:
confirming a target concurrency threshold prediction model and a target throughput threshold prediction model of the node cluster corresponding to the target node from the concurrency threshold prediction models and the throughput threshold prediction models of the node clusters;
and respectively inputting the historical access information of the target interface into the target concurrency number threshold prediction model and the target throughput threshold prediction model to perform threshold prediction, so as to obtain a target concurrency number threshold and a target throughput threshold of the target interface.
In one embodiment, the synchronizing the current limit information of the target interface to a current limit information cache of an associated node of the target node includes:
sending the current limiting information of the target interface to a configuration center; the configuration center is used for identifying interface parameter information of the target interface from the current limiting information of the target interface, confirming an associated node of the target node from a node cluster corresponding to the target node according to the interface parameter information, and sending the current limiting information of the target interface to the associated node, so that the associated node adds the received current limiting information of the target interface to its own current limiting information cache.
In a second aspect, the present application further provides another traffic limiting method applied to a node. The method comprises the following steps:
acquiring current limiting information of a target interface synchronized by a front-end processor under the condition that the node is a correlation node of a target node; the associated node is each node of the target interface in the node cluster corresponding to the target node except the target node; the flow limiting information of the target interface is used for the target interface to perform flow limiting processing on a flow access request meeting a threshold condition, the pre-processor is used for confirming a target node and a target interface corresponding to the flow access request under the condition that the flow access request is detected, generating the flow limiting information of the target interface under the condition that the flow limiting information of the target interface does not exist in a flow limiting information cache of the target node, and adding the flow limiting information of the target interface into the flow limiting information cache of the target node;
and adding the current limiting information into a current limiting information cache of the node.
In one embodiment, when the node is an associated node of the target node, before acquiring the current limit information of the target interface synchronized by the pre-processor, the method further includes:
under the condition that a configuration center is detected to receive the current limiting information of the target interface sent by the front-end processor, acquiring node parameter information of the target node from a current limiting information receiving record of the configuration center, and identifying interface parameter information of the target interface from the current limiting information of the target interface cached by the configuration center;
inquiring each interface of the node under the condition that the node and the target node belong to the same node cluster according to the node parameter information of the target node;
confirming the node as a related node of the target node under the condition that an interface matched with the interface parameter information of the target interface exists in each interface of the node;
the acquiring, when the node is a related node of the target node, current limiting information of a target interface synchronized by the pre-processor includes:
and under the condition that the node is the associated node of the target node, acquiring the current limiting information of the target interface generated by the pre-processor from a cache of the configuration center.
In a third aspect, the present application further provides a flow restriction device. The device comprises:
the node interface confirmation module is used for confirming a target node and a target interface corresponding to the flow access request under the condition of detecting the flow access request;
the current limiting information generating module is used for generating the current limiting information of the target interface under the condition that the current limiting information of the target interface does not exist in the current limiting information cache of the target node after being inquired out, and adding the current limiting information of the target interface into the current limiting information cache of the target node; the flow limiting information of the target interface is used for the target interface to carry out flow limiting processing on the flow access request meeting the threshold condition;
the current limiting information synchronization module is used for synchronizing the current limiting information of the target interface to a current limiting information cache of a related node of the target node; the associated node is each node of the target interface in the nodes except the target node in the node cluster corresponding to the target node.
In a fourth aspect, the present application further provides another flow restriction device. The device comprises:
the current limiting information acquisition module is used for acquiring current limiting information of a target interface synchronized by the front-end processor under the condition that the node is a correlation node of a target node; the associated node is each node of the target interface in the node cluster corresponding to the target node except the target node; the flow limiting information of the target interface is used for the target interface to perform flow limiting processing on a flow access request meeting a threshold condition, and the pre-processor is used for confirming a target node and a target interface corresponding to the flow access request under the condition that the flow access request is detected, generating flow limiting information of the target interface under the condition that the flow limiting information of the target interface does not exist in a flow limiting information cache of the target node, and adding the flow limiting information of the target interface into the flow limiting information cache of the target node;
and the current limiting information adding module is used for adding the current limiting information into the current limiting information cache of the node.
In a fifth aspect, the application further provides a computer device. The computer device comprises a memory storing a computer program and a processor implementing the following steps when executing the computer program:
under the condition that a flow access request is detected, confirming a target node and a target interface corresponding to the flow access request;
generating the current limiting information of the target interface under the condition that the current limiting information of the target interface does not exist in the current limiting information cache of the target node, and adding the current limiting information of the target interface into the current limiting information cache of the target node; the flow limiting information of the target interface is used for the target interface to carry out flow limiting processing on the flow access request meeting the threshold condition;
synchronizing the current limiting information of the target interface to a current limiting information cache of an associated node of the target node; the associated node is each node of the target interface in the nodes except the target node in the node cluster corresponding to the target node.
In a sixth aspect, the present application further provides a computer-readable storage medium. The computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of:
under the condition that a flow access request is detected, confirming a target node and a target interface corresponding to the flow access request;
generating the current limiting information of the target interface under the condition that the current limiting information of the target interface does not exist in the current limiting information cache of the target node, and adding the current limiting information of the target interface into the current limiting information cache of the target node; the flow limiting information of the target interface is used for the target interface to carry out flow limiting processing on the flow access request meeting the threshold condition;
synchronizing the current limiting information of the target interface to a current limiting information cache of an associated node of the target node; the associated node is each node of the target interface in the nodes except the target node in the node cluster corresponding to the target node.
In a seventh aspect, the present application further provides a computer program product. The computer program product comprising a computer program which when executed by a processor performs the steps of:
under the condition that a flow access request is detected, confirming a target node and a target interface corresponding to the flow access request;
generating the current limiting information of the target interface under the condition that the current limiting information of the target interface does not exist in the current limiting information cache of the target node, and adding the current limiting information of the target interface into the current limiting information cache of the target node; the flow limiting information of the target interface is used for the target interface to carry out flow limiting processing on the flow access request meeting the threshold condition;
synchronizing the current limiting information of the target interface to a current limiting information cache of an associated node of the target node; the associated node is each node of the target interface in the node cluster corresponding to the target node except the target node.
According to the traffic limiting method, apparatus, computer device, storage medium and computer program product, when a traffic access request is detected, the target node and target interface corresponding to the traffic access request are first confirmed; then, if no current limiting information for the target interface is found in the current limiting information cache of the target node, current limiting information for the target interface is generated and added to the current limiting information cache of the target node; finally, the current limiting information of the target interface is synchronized to the current limiting information caches of the associated nodes of the target node, where an associated node is any node, other than the target node, in the node cluster corresponding to the target node that also includes the target interface. In this way, when the pre-processor detects a traffic access request, it generates corresponding current limiting information for a target interface that has not yet been configured with any, and then synchronizes the generated current limiting information to the current limiting information cache of every node that includes the target interface, so that each interface in the system receiving traffic access requests can be configured with corresponding current limiting information. This flow limiting process avoids system downtime or service unavailability caused by a large number of traffic access requests hitting interfaces for which no current limiting information has been configured, even when sudden surges and drops in interface access traffic cannot be anticipated, thereby improving the high availability of the system.
Drawings
FIG. 1 is a diagram of an exemplary flow restriction application;
FIG. 2 is a flow diagram illustrating a flow limiting method in one embodiment;
FIG. 3 is a flow chart illustrating a flow restriction method according to another embodiment;
FIG. 4 is a flowchart illustrating steps for obtaining current limit information for a target interface synchronized by a pre-processor in one embodiment;
FIG. 5 is a flow chart illustrating a flow restriction method in accordance with another embodiment;
FIG. 6 is a block diagram of the construction of a flow restriction device according to one embodiment;
FIG. 7 is a block diagram of another embodiment of a flow restriction device;
FIG. 8 is a diagram of an internal structure of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely intended to illustrate the present application and are not intended to limit it.
The flow limiting method provided by the embodiment of the application can be applied to the application environment shown in fig. 1. The application environment includes a pre-processor 102, a configuration center 104, a node 106, and an interface 108, and the pre-processor 102, the configuration center 104, the node 106, and the interface 108 communicate with each other via a network. The pre-processor 102 is configured to intercept a traffic access request before it enters the interface 108, determine whether the target interface corresponding to the traffic access request is configured with current limiting information, and generate corresponding current limiting information for an interface that is not; the configuration center 104 is configured to cache the current limiting information of each interface in the system that provides services to the outside, and to manage the node cluster configuration, node parameter information, interface parameter information, and the like in the system; the current limiting information cache in the node 106 is used for caching the current limiting information of each interface in the node 106; the interface 108 is configured to provide the corresponding service to the sending end of the traffic access request according to the traffic access request and the current limiting information. It should be noted that one node may correspond to a plurality of interfaces, and one interface may also correspond to a plurality of nodes at the same time; for example, the node 1 includes an interface A, an interface B, an interface C, and an interface D, and the interface A may exist in the node 1, the node 2, and the node 3 at the same time. It should be further noted that a plurality of node clusters exist in the system, and each node cluster provides services of the same or a similar type to the outside; for example, the node cluster 1 provides a micro service call service, the node cluster 2 provides an HTTP (Hyper Text Transfer Protocol) call service, and the node cluster 3 provides a database access service. Each node cluster includes one or more nodes; for example, the node cluster 1 includes a node 1, a node 2, and a node 3. The pre-processor 102 and the configuration center 104 may each be implemented by a separate server or terminal, or by a cluster of multiple servers or terminals.
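To make the deployment described above concrete, the following Python sketch models the entities of the application environment: node clusters, nodes with their interfaces, a per-node current limiting information cache, and the current limiting information itself. This is an illustrative assumption about the data model only; the class names and fields are not taken from the application, and the interface assignments of node 2 and node 3 are inferred from the examples given later in the text.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class LimitInfo:
    """Current limiting information configured for one interface."""
    interface_id: str
    max_concurrency: int   # target concurrency number threshold
    max_throughput: int    # target throughput threshold (requests per unit time)


@dataclass
class Node:
    node_id: str
    interfaces: List[str]
    # Current limiting information cache: interface id -> LimitInfo.
    limit_cache: Dict[str, LimitInfo] = field(default_factory=dict)


@dataclass
class NodeCluster:
    cluster_id: str
    service_type: str      # e.g. "microservice", "http", "database"
    nodes: List[Node]


# Example topology in the spirit of Fig. 1: node cluster 1 provides microservice
# calls; interface A exists on nodes 1-3, interface C on nodes 1-2, interface D
# on nodes 1 and 3 (the exact membership of nodes 2 and 3 is assumed).
cluster1 = NodeCluster(
    cluster_id="cluster 1",
    service_type="microservice",
    nodes=[
        Node("node 1", ["interface A", "interface B", "interface C", "interface D"]),
        Node("node 2", ["interface A", "interface C"]),
        Node("node 3", ["interface A", "interface D"]),
    ],
)

if __name__ == "__main__":
    for node in cluster1.nodes:
        print(node.node_id, node.interfaces, node.limit_cache)
```

The later sketches in this description reuse this assumed topology.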
Specifically, the pre-processor 102, in the case of detecting a traffic access request, confirms a target node 106 (such as node 1) and a target interface 108 (such as interface a) corresponding to the traffic access request; then, under the condition that the current limiting information of the target interface 108 does not exist in the current limiting information cache of the target node 106, generating the current limiting information of the target interface 108, and adding the current limiting information of the target interface 108 into the current limiting information cache of the target node 106; finally, synchronizing the current limiting information of the target interface 108 to the current limiting information cache of the associated node of the target node 106; the associated node is each node of the node cluster corresponding to the target node 106, except the target node 106, including the target interface 108.
In an exemplary embodiment, as shown in fig. 2, a flow limiting method is provided, which is described by taking the method as an example applied to the front-end processor in fig. 1, and includes the following steps:
step S202, when the traffic access request is detected, confirms the target node and the target interface corresponding to the traffic access request.
The target interface corresponding to the traffic access request is the interface to be accessed by the traffic access request, and is usually determined by the request content of the traffic access request; the target node corresponding to the traffic access request is a node corresponding to the interface to be accessed by the traffic access request, and is usually determined by a load balancing policy in the system, such as round robin (polling), random, retry, or hash.
The system refers to a system that provides services to the outside, for example, a backend service that provides a web page, an application that provides a microservice, and the like.
Specifically, when detecting that a flow access request enters a system, a front-end processor firstly intercepts the flow access request and obtains a service to be obtained by the flow access request by analyzing the flow access request; then according to the service to be acquired of the flow access request, confirming a node cluster capable of providing service to a sending end of the flow access request; and then, based on the node cluster and a load balancing strategy in the system, confirming a target node corresponding to the flow access request in the node cluster, and confirming an interface which can provide service for a sending end of the flow access request in the target node as a target interface.
For example, referring to fig. 1, it is assumed that the traffic access request is to obtain a user list recorded by a micro service, and a node cluster capable of providing the service to the outside in the system is a node cluster 1, and it is determined that the node 1 including the interface a provides the service to the outside according to a load balancing policy inside the system, so that the node 1 is a target node, and the interface a in the node 1 is a target interface.
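The selection of the target node among the nodes that expose the requested interface can be pictured with a round-robin (polling) load-balancing policy, one of the policies listed above. The following sketch is a minimal assumption about how that selection might look; the function and variable names are hypothetical.

```python
import itertools
from typing import Dict, List, Tuple

# Which nodes of cluster 1 expose each interface (illustrative topology).
INTERFACE_NODES: Dict[str, List[str]] = {
    "interface A": ["node 1", "node 2", "node 3"],
    "interface B": ["node 1"],
}

# One round-robin iterator per interface implements the "polling" policy.
_round_robin = {iface: itertools.cycle(nodes) for iface, nodes in INTERFACE_NODES.items()}


def resolve_target(interface_id: str) -> Tuple[str, str]:
    """Return (target node, target interface) for a traffic access request
    that wants to reach interface_id, rotating across the candidate nodes."""
    if interface_id not in INTERFACE_NODES:
        raise KeyError(f"no node in the cluster exposes {interface_id}")
    return next(_round_robin[interface_id]), interface_id


if __name__ == "__main__":
    # Three consecutive requests for interface A are spread across nodes 1, 2 and 3.
    for _ in range(3):
        print(resolve_target("interface A"))
```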
Step S204: if, on querying the current limiting information cache of the target node, no current limiting information of the target interface is found, generate the current limiting information of the target interface and add it to the current limiting information cache of the target node.
The current limiting information cache of the target node is used for recording the current limiting information of each interface included by the target node; the flow limiting information is a flow limiting rule of the interface, and the flow limiting information is used for the interface to carry out flow limiting processing on the flow access request meeting the threshold condition; the threshold condition is a flow limiting condition which can be loaded by each interface; the traffic limiting process is a drop process, i.e. the interface denies the invocation of the traffic access request in case the traffic of the traffic access request exceeds a threshold condition.
Specifically, after confirming a target node and a target interface corresponding to a traffic access request, a pre-processor needs to query whether current limiting information corresponding to the target interface exists in a current limiting information cache of the target node; if the current limiting information of the target interface cannot be inquired in the current limiting information cache of the target node, the pre-processor generates the current limiting information of the target interface, so that the current limiting information is configured for the target interface, the current limiting information of the target interface is added into the current limiting information cache of the target node, and then the pre-processor sends a flow access request to the target interface in the target node; if the current limiting information of the target interface can be inquired in the current limiting information cache of the target node, the pre-processor sends a flow access request to the target interface in the target node; and after receiving the flow access request, the target interface processes the flow access request based on the flow limiting information corresponding to the target interface in the flow limiting information cache of the target node.
For example, referring to fig. 1, the pre-processor queries the current limiting information cache of the node 1, and if the current limiting information cache does not have the current limiting information of the interface a, the pre-processor generates the current limiting information of the interface a, adds the generated current limiting information of the interface a to the current limiting information cache of the node 1, and sends a traffic access request to the interface a of the node 1; if the current limiting information of the interface A exists in the current limiting information cache, the pre-processor sends a flow access request to the interface A; after receiving the flow access request, the interface a processes the flow access request according to the corresponding flow limiting information in the flow limiting information cache of the node 1. For example, assuming that the traffic access request does not exceed the threshold condition, the interface a passes the traffic access request, and normally provides a service to the outside according to the traffic access request; if the flow access request exceeds the threshold condition, the interface A refuses the flow access request and provides no service to the outside.
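Step S204 can be summarised as: look up the cache, generate on a miss, then let the interface enforce the thresholds. The sketch below follows that shape under some simplifying assumptions: the threshold check is reduced to a single concurrency counter, and generate_limit_info returns fixed values in place of the prediction described later in this description.

```python
from dataclasses import dataclass
from typing import Dict


@dataclass
class LimitInfo:
    max_concurrency: int
    max_throughput: int


# Current limiting information cache of node 1: interface id -> LimitInfo.
node1_limit_cache: Dict[str, LimitInfo] = {}

# Requests currently being processed by each interface (illustrative bookkeeping).
in_flight: Dict[str, int] = {}


def generate_limit_info(interface_id: str) -> LimitInfo:
    # Placeholder for the prediction-based generation of current limiting
    # information; fixed values stand in for the predicted thresholds.
    return LimitInfo(max_concurrency=100, max_throughput=500)


def handle_request(interface_id: str) -> str:
    """Pre-processor side: ensure limit info exists for the target interface,
    then forward the traffic access request to it."""
    if interface_id not in node1_limit_cache:
        node1_limit_cache[interface_id] = generate_limit_info(interface_id)
    return call_interface(interface_id)


def call_interface(interface_id: str) -> str:
    """Interface side: reject (drop) the request once the threshold condition is met."""
    limit = node1_limit_cache[interface_id]
    current = in_flight.get(interface_id, 0)
    if current >= limit.max_concurrency:
        return "rejected"                      # flow limiting processing
    in_flight[interface_id] = current + 1
    try:
        return "served"                        # normal processing
    finally:
        in_flight[interface_id] = current      # request finished


if __name__ == "__main__":
    print(handle_request("interface A"))       # limit info generated on first access
    print(node1_limit_cache)
```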
Step S206: synchronize the current limiting information of the target interface to the current limiting information caches of the associated nodes of the target node.
An associated node is any node in the node cluster corresponding to the target node, other than the target node, that also includes the target interface. For example, for interface A of node 1, node 2 and node 3 are associated nodes of node 1; for interface C of node 1, node 2 is an associated node of node 1, while node 3 is not an associated node of node 1 since it does not include interface C.
Specifically, the pre-processor synchronizes the generated current limiting information of the target interface to the current limiting information cache of each node in the node cluster corresponding to the target node that includes the target interface, other than the target node itself.
Referring to fig. 1, node 1, node 2 and node 3 are in node cluster 1. Taking interface A of node 1 as an example, the pre-processor generates the current limiting information of interface A, adds it to the current limiting information cache of node 1, and synchronizes it to the current limiting information caches of node 2 and node 3, thereby configuring current limiting information for interface A on node 2 and node 3. Taking interface D of node 3 as an example, the pre-processor generates the current limiting information of interface D, adds it to the current limiting information cache of node 3, and synchronizes it to the current limiting information cache of node 1, thereby configuring current limiting information for interface D on node 1.
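The notion of an associated node used in this step reduces to a simple set computation over the cluster topology. The sketch below reproduces the interface A, interface C and interface D examples above; the topology dictionary is an illustrative assumption, as before.

```python
from typing import Dict, List

# Interfaces exposed by each node of cluster 1 (inferred from the examples above).
CLUSTER_1: Dict[str, List[str]] = {
    "node 1": ["interface A", "interface B", "interface C", "interface D"],
    "node 2": ["interface A", "interface C"],
    "node 3": ["interface A", "interface D"],
}


def associated_nodes(target_node: str, target_interface: str) -> List[str]:
    """Nodes of the same cluster, other than the target node, that also include
    the target interface."""
    return [
        node for node, interfaces in CLUSTER_1.items()
        if node != target_node and target_interface in interfaces
    ]


if __name__ == "__main__":
    print(associated_nodes("node 1", "interface A"))   # ['node 2', 'node 3']
    print(associated_nodes("node 1", "interface C"))   # ['node 2'] (node 3 lacks interface C)
    print(associated_nodes("node 3", "interface D"))   # ['node 1']
```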
In this traffic limiting method, the pre-processor first confirms the target node and target interface corresponding to a traffic access request when the request is detected; it then generates current limiting information for the target interface if none exists in the current limiting information cache of the target node and adds it to that cache; finally, it synchronizes the current limiting information of the target interface to the current limiting information caches of the associated nodes of the target node, where an associated node is any node, other than the target node, in the node cluster corresponding to the target node that also includes the target interface. In this way, when the pre-processor detects a traffic access request, it generates corresponding current limiting information for a target interface that has not been configured with any, and then synchronizes the generated information to the current limiting information cache of each node that includes the target interface, so that every interface in the system receiving traffic access requests can be configured with corresponding current limiting information. This flow limiting process avoids system downtime or service unavailability caused by a large number of traffic access requests hitting interfaces for which no current limiting information has been configured, even when sudden surges and drops in interface access traffic cannot be anticipated, thereby improving the high availability of the system.
In an exemplary embodiment, in the step S202, when the traffic access request is detected, confirming the target node and the target interface corresponding to the traffic access request specifically includes the following: under the condition that the flow access request is detected, identifying the request type of the flow access request and interface parameter information corresponding to the flow access request; confirming a node cluster corresponding to the flow access request according to the request type; and identifying a node comprising an interface matched with the interface parameter information from the node cluster as a target node, and taking an interface matched with the interface parameter information in the target node as a target interface.
The request type of the flow access request refers to the type of a service to be acquired by the flow access request, such as micro service call service, HTTP call service, database access service, and the like; the interface parameter information refers to an identifier of an interface, such as interface a, interface B, or interface C.
Specifically, the pre-processor intercepts a flow access request under the condition that the flow access request is detected, analyzes the flow access request, and identifies a request type carried by the flow access request and interface parameter information of an interface to be accessed; then according to the request type, confirming a node cluster corresponding to the flow access request in a plurality of node clusters of the system; and then according to the load balancing strategy, identifying a node comprising an interface matched with the interface parameter information from the corresponding node cluster, using the node as a target node corresponding to the flow access request, and confirming the interface matched with the interface parameter information in the target node as a target interface corresponding to the flow access request.
For example, the pre-processor identifies a request type of the traffic access request and interface parameter information of an interface to be accessed from the traffic access request, referring to fig. 1, where the type of the traffic access request is a call micro-service, and the interface parameter information of the interface to be accessed is an interface a, then the pre-processor first confirms a node cluster 1 from a plurality of node clusters of the system according to the request type of the "call micro-service", and confirms a node 1 in the node cluster 1 as a target node based on a load balancing policy of the system according to the interface parameter information of the "interface a", and confirms an interface a in the node 1 as a target interface.
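Identifying the request type and the interface parameter information, and mapping the type to a node cluster, might look like the following sketch. The request dictionary layout and the type-to-cluster table are assumptions for illustration only.

```python
from typing import Dict, Tuple

# Request type -> node cluster, mirroring the clusters described for Fig. 1.
CLUSTER_BY_REQUEST_TYPE: Dict[str, str] = {
    "microservice": "cluster 1",
    "http": "cluster 2",
    "database": "cluster 3",
}


def classify_request(request: Dict[str, str]) -> Tuple[str, str]:
    """Extract the request type and interface parameter information carried by an
    intercepted traffic access request and map the type to its node cluster."""
    request_type = request["type"]        # e.g. "microservice"
    interface_id = request["interface"]   # e.g. "interface A"
    return CLUSTER_BY_REQUEST_TYPE[request_type], interface_id


if __name__ == "__main__":
    req = {"type": "microservice", "interface": "interface A", "payload": "get user list"}
    print(classify_request(req))          # ('cluster 1', 'interface A')
```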
In this embodiment, the pre-processor determines the node cluster, the target node and the target interface for the traffic access request according to information carried by the traffic access request, and then can determine whether current-limiting information of the target interface exists in a current-limiting information cache of the target node, and if not, generate the current-limiting information for the target interface, so that the target interface can perform corresponding current-limiting processing on the traffic access request according to the current-limiting information.
In an exemplary embodiment, in step S204, when the current limiting information of the target interface does not exist in the current limiting information cache of the target node, the current limiting information of the target interface is generated, which specifically includes the following contents: in the current limiting information cache of the target node, acquiring historical access information of the target interface under the condition that current limiting information corresponding to the interface parameter information of the target interface does not exist; predicting a target concurrency number threshold and a target throughput threshold of a target interface according to historical access information of the target interface; and generating the current limiting information of the target interface according to the target concurrency threshold and the target throughput threshold of the target interface.
The historical access information records past flow access requests received by the target interface, flow sizes of the flow access requests and processing states of the flow access requests, such as normal processing or abnormal processing, wherein the abnormal processing refers to a processing state in which the interface cannot provide services for the outside due to exceeding of a load. The concurrency number is the number of requests which can be simultaneously processed by the interface receiving the flow access request, and the throughput is the number of requests which can be processed by the interface receiving the flow access request in unit time.
Specifically, after determining a target node and a target interface, a pre-processor needs to query whether current-limiting information corresponding to the target interface exists in a current-limiting information cache of the target node through interface parameter information of the target interface; if yes, processing the flow access request through the target interface; if not, historical access information of the target interface is obtained, a target concurrency threshold value and a target throughput threshold value of the target interface are predicted according to the historical access information, and then current limiting information corresponding to the target interface is generated according to the target concurrency threshold value and the target throughput threshold value.
For example, referring to fig. 1, if the pre-processor cannot query the current-limiting information cache of the interface a in the current-limiting information cache of the node 1, the target concurrency threshold and the target throughput threshold of the interface a are predicted according to the historical access information of the interface a, so as to generate the current-limiting information for the interface a.
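Generating current limiting information from historical access information can be sketched as follows. The AccessRecord fields and the "highest load handled normally" heuristic are assumptions standing in for the prediction models described in the next embodiment; they are not the application's own prediction method.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class AccessRecord:
    concurrency: int    # concurrent requests observed
    throughput: int     # requests handled per unit time
    abnormal: bool      # True if the interface could not serve because its load was exceeded


@dataclass
class LimitInfo:
    max_concurrency: int
    max_throughput: int


def generate_limit_info(history: List[AccessRecord]) -> LimitInfo:
    """Stand-in for threshold prediction: use the highest load the interface
    handled normally as its concurrency and throughput thresholds."""
    normal = [r for r in history if not r.abnormal] or history
    return LimitInfo(
        max_concurrency=max(r.concurrency for r in normal),
        max_throughput=max(r.throughput for r in normal),
    )


if __name__ == "__main__":
    history = [
        AccessRecord(concurrency=40, throughput=200, abnormal=False),
        AccessRecord(concurrency=80, throughput=350, abnormal=False),
        AccessRecord(concurrency=120, throughput=500, abnormal=True),   # overload observed
    ]
    print(generate_limit_info(history))   # LimitInfo(max_concurrency=80, max_throughput=350)
```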
In this embodiment, the pre-processor obtains the target concurrency threshold and the target throughput threshold of the target interface through the historical access information of the target interface, and can generate corresponding current limiting information for an interface which is not configured with current limiting information, thereby avoiding system downtime or service unavailability caused by the fact that the received traffic access request exceeds the load capacity of the interface when the interface cannot definitely access traffic changes.
In an exemplary embodiment, before predicting the target concurrency number threshold and the target throughput threshold of the target interface according to the historical access information of the target interface in the above step, the following is further specifically included: acquiring historical access information of each interface under a plurality of node clusters; according to historical access information of each interface under the node clusters, a concurrency threshold value prediction model and a throughput threshold value prediction model of the node clusters are constructed.
The step of predicting the target concurrency number threshold and the target throughput threshold of the target interface according to the historical access information of the target interface specifically includes the following steps: confirming a target concurrency threshold prediction model and a target throughput threshold prediction model of the node cluster corresponding to the target node from the concurrency threshold prediction models and the throughput threshold prediction models of the node clusters; and respectively inputting the historical access information of the target interface into a target concurrency number threshold prediction model and a target throughput threshold prediction model for threshold prediction to obtain a target concurrency number threshold and a target throughput threshold of the target interface.
The concurrency threshold prediction model and the throughput threshold prediction model are obtained by repeatedly performing deep learning on the historical access information of each interface, that is, on the concurrency and throughput that each interface was able to bear before its load was exceeded and it went down.
Specifically, the pre-processor firstly constructs a corresponding concurrency threshold prediction model and a corresponding throughput threshold prediction model for each node cluster according to concurrency conditions and throughput conditions which can be borne by each interface under a plurality of node clusters; and then the pre-processor confirms a target concurrency threshold value prediction model and a target throughput threshold value prediction model of the node cluster corresponding to the target node from the multiple concurrency threshold value prediction models and throughput threshold value prediction models according to the node cluster corresponding to the flow access request, takes historical access information of the target interface as input, and respectively predicts and obtains the target concurrency threshold value and the target throughput threshold value of the target interface through the target concurrency threshold value prediction model and the target throughput threshold value prediction model.
In the embodiment, the pre-processor constructs a prediction model for the concurrency threshold value and the throughput of the interface based on historical access information of each interface in each node cluster; and obtaining the target concurrency number and the target throughput of the target interface through a target concurrency number threshold prediction model and a target throughput threshold prediction model based on the historical access information of the target interface. That is to say, the pre-processor can accurately predict the specific flow limitation condition that a certain target interface can load through the historical access information of the target interface and the corresponding prediction model, and provide a threshold basis for the target interface to generate the flow limitation information, thereby improving the high availability of the system.
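The per-cluster models can be pictured as a registry keyed by node cluster: each model is trained once on that cluster's historical access information and then queried with the target interface's history. In the sketch below a simple safety-ratio heuristic stands in for the deep-learning models, and only the concurrency threshold is shown; the throughput model would be analogous. All names and numbers are illustrative assumptions.

```python
from statistics import mean
from typing import Dict, List, Tuple

# Historical access information per cluster: cluster id -> interface id ->
# list of (concurrency, overloaded) observations.  Purely illustrative numbers.
HISTORY: Dict[str, Dict[str, List[Tuple[int, bool]]]] = {
    "cluster 1": {
        "interface A": [(40, False), (80, False), (120, True)],
        "interface B": [(30, False), (60, False), (90, True)],
    },
    "cluster 2": {
        "interface X": [(200, False), (400, False), (600, True)],
    },
}


class ConcurrencyThresholdModel:
    """Stand-in for the deep-learning concurrency threshold prediction model:
    it learns, per cluster, what fraction of the first overload point the
    cluster's interfaces could still bear."""

    def __init__(self, samples: Dict[str, List[Tuple[int, bool]]]):
        ratios = []
        for records in samples.values():
            safe_peak = max(c for c, overloaded in records if not overloaded)
            overload_point = min((c for c, overloaded in records if overloaded),
                                 default=safe_peak)
            ratios.append(safe_peak / overload_point)
        self.safety_ratio = mean(ratios)

    def predict(self, interface_history: List[Tuple[int, bool]]) -> int:
        overload_point = min((c for c, overloaded in interface_history if overloaded),
                             default=max(c for c, _ in interface_history))
        return round(overload_point * self.safety_ratio)


# One model per node cluster, built from that cluster's historical access information.
MODELS = {cluster: ConcurrencyThresholdModel(samples) for cluster, samples in HISTORY.items()}

if __name__ == "__main__":
    target_cluster = "cluster 1"                    # cluster of the target node
    model = MODELS[target_cluster]                  # select that cluster's own model
    print(model.predict(HISTORY["cluster 1"]["interface A"]))   # predicted concurrency threshold
```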
In an exemplary embodiment, in the step S206, synchronizing the current limit information of the target interface to the current limit information cache of the association node of the target node specifically includes the following contents: and sending the current limiting information of the target interface to a configuration center.
The configuration center is used for identifying interface parameter information of the target interface from the current limiting information of the target interface, confirming an associated node of the target node from a node cluster corresponding to the target node according to the interface parameter information, and sending the current limiting information of the target interface to the associated node, so that the associated node adds the received current limiting information of the target interface to a current limiting information cache of the node.
Specifically, the pre-processor sends the current limiting information of a target interface to a configuration center, the configuration center identifies node parameter information of a target node and interface parameter information of the target interface from the received current limiting information, determines a node cluster according to the node parameter information, then confirms each node, except the target node, in the node cluster and including an interface corresponding to the interface parameter information as an associated node of the target node according to the interface parameter information, and sends the current limiting information of the target interface to each associated node, and after each associated node receives the current limiting information sent by the configuration center, the received current limiting information is added to a current limiting information cache of the node.
For example, referring to fig. 1, if the target node is node 1 and the target interface is interface a, the configuration center determines that the associated nodes are node 2 and node 3, and sends the received current limiting information of interface a to node 2 and node 3, and node 2 and node 3 add the received current limiting information to the corresponding current limiting information cache; for another example, if the target node is node 2 and the target interface is interface C, the configuration center determines that the associated node is node 1, and sends the received current limiting information of interface C to node 1, and node 1 adds the received current limiting information to the current limiting information cache.
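On the configuration-center side, the steps described above amount to: read the received current limiting information, look up the target node's cluster, filter for associated nodes, and push. The following sketch assumes the configuration center holds the cluster membership and per-node interface lists in memory and that pushing is a direct dictionary update; in a real deployment both would be remote calls.

```python
from typing import Dict, List

# Configuration-center view of the system (illustrative).
NODE_CLUSTERS: Dict[str, List[str]] = {"cluster 1": ["node 1", "node 2", "node 3"]}
NODE_INTERFACES: Dict[str, List[str]] = {
    "node 1": ["interface A", "interface B", "interface C", "interface D"],
    "node 2": ["interface A", "interface C"],
    "node 3": ["interface A", "interface D"],
}

# Per-node current limiting information caches the configuration center pushes into.
NODE_CACHES: Dict[str, Dict[str, dict]] = {n: {} for n in NODE_INTERFACES}


def on_limit_info_received(target_node: str, interface_id: str, limit_info: dict) -> List[str]:
    """Handle current limiting information sent by the pre-processor: find the
    target node's cluster, pick its associated nodes, and push the info to them."""
    cluster = next(c for c, members in NODE_CLUSTERS.items() if target_node in members)
    associated = [
        n for n in NODE_CLUSTERS[cluster]
        if n != target_node and interface_id in NODE_INTERFACES[n]
    ]
    for node in associated:
        NODE_CACHES[node][interface_id] = limit_info   # the node adds it to its own cache
    return associated


if __name__ == "__main__":
    pushed_to = on_limit_info_received("node 2", "interface C",
                                       {"max_concurrency": 80, "max_throughput": 350})
    print(pushed_to)               # ['node 1']  (node 3 does not include interface C)
    print(NODE_CACHES["node 1"])
```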
In this embodiment, the pre-processor sends the current limiting information to the configuration center, and the configuration center confirms the associated node of the target node, so that the current limiting information of the target interface is synchronized to the current limiting information cache of the associated node of the target node, and then the current limiting information is generated for one target interface, thereby achieving the purpose of configuring the current limiting information for each corresponding interface, avoiding the possibility that the interface is down under the condition of sudden increase of the flow rate due to no configuration of the current limiting information, and further improving the high availability of the system.
In an exemplary embodiment, as shown in fig. 3, there is also provided a traffic limiting method, which is described by taking the method as an example applied to the node in fig. 1, and includes the following steps:
s302, under the condition that the node is the associated node of the target node, the current limiting information of the target interface synchronized by the front-end processor is obtained.
And S304, adding the current limiting information into the current limiting information cache of the node.
An associated node is any node, other than the target node, in the node cluster corresponding to the target node that also includes the target interface. The flow limiting information of the target interface is used by the target interface to perform flow limiting processing on traffic access requests meeting a threshold condition. The pre-processor is used for confirming the target node and target interface corresponding to a traffic access request when the traffic access request is detected, generating the flow limiting information of the target interface when no flow limiting information of the target interface exists in the flow limiting information cache of the target node, and adding the flow limiting information of the target interface to the flow limiting information cache of the target node.
Specifically, the node acquires the current limiting information of the target interface synchronized by the pre-processor under the condition that the node judges that the node belongs to the associated node of the target node, and adds the current limiting information of the target interface to the current limiting information cache of the node.
For example, referring to the node 2 in fig. 1, assuming that the target node is the node 1 and the target interface is the interface a, based on the above information, the node 2 may determine that it belongs to the associated node of the node 1, so as to obtain the current limiting information of the interface a synchronized by the pre-processor and add the current limiting information to the current limiting information cache; assuming that the target node is node 1 and the target interface is interface D, based on the above information, node 2 can determine that it does not belong to the associated node of node 1, so it is not necessary to obtain the current limiting information of interface D synchronized by the pre-processor.
It should be noted that, for the specific limitations of the flow limiting method, reference may be made to the specific limitations of step S202 to step S206, which are not described herein again.
In this embodiment, the node acquires the current limiting information of the target interface synchronized by the pre-processor when the node is the associated node of the target node, and adds the current limiting information to the current limiting information cache of the node, so that the purpose that the current limiting information can be configured for each interface corresponding to the target interface while the pre-processor generates the current limiting information for a specific target interface is achieved, the possibility that the interface is down under the condition of sudden increase of traffic due to no configuration of the current limiting information is avoided, and the high availability of the system is further improved.
In an exemplary embodiment, as shown in fig. 4, the steps in the traffic limiting method applied to the node, which are further used for acquiring the current limiting information of the target interface synchronized by the pre-processor, further include the following steps:
step 402, under the condition that it is detected that the configuration center receives the current limiting information of the target interface sent by the pre-processor, acquiring the node parameter information of the target node from the current limiting information receiving record of the configuration center, and identifying the interface parameter information of the target interface from the current limiting information of the target interface cached by the configuration center.
And step 404, inquiring each interface of the node under the condition that the node and the target node belong to the same node cluster according to the node parameter information of the target node.
And step 406, confirming the node as the associated node of the target node when the interface matched with the interface parameter information of the target interface exists in each interface of the node.
And step 408, acquiring the current limiting information of the target interface generated by the pre-processor from the cache of the configuration center under the condition that the node is the associated node of the target node.
The node parameter information is an identifier of a node, such as node 1, node 2, or node 3.
Specifically, the node monitors the change of the current limiting information of the configuration center in real time, when the configuration center receives the current limiting information of the target interface sent by the pre-processor, the node acquires the node parameter information of the target node to which the target interface belongs from the current limiting information receiving record of the configuration center, and identifies the interface parameter information of the target interface from the current limiting information of the target interface cached by the configuration center; and then the node confirms whether the node and the target node belong to the same node cluster or not according to the node parameter information, if so, the node inquires each interface of the node and confirms whether an interface matched with the interface parameter information of the target interface exists or not, if so, the node confirms the node as a related node of the target node, and the current limiting information of the target interface generated by the pre-processor is obtained from a cache of a configuration center.
For example, referring to node 2 in fig. 1, if the target node is node 1 and the target interface is interface A, node 2 can confirm that it belongs to the same node cluster as node 1 and that it has an interface matching interface A, so node 2 confirms that it is an associated node of the target node and obtains the current limiting information of interface A from the cache of the configuration center. For another example, if the target node is node 1 and the target interface is interface D, then although node 2 and node 1 belong to the same node cluster, node 2 has no interface matching interface D, so it does not need to obtain the current limiting information of interface D from the configuration center. If the node is a node in node cluster 2 or node cluster 3, it performs no operation after determining that it does not belong to the same node cluster as node 1.
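Seen from a single node, the monitoring described above is a loop that inspects the configuration center's receipt record and decides whether to pull. The sketch below shows one iteration of that decision for node 2; the receipt-record layout, the in-memory stand-in for the configuration center, and the pull-rather-than-push style are all assumptions.

```python
from typing import Dict, Optional

# What this node knows about itself (illustrative values for node 2 of Fig. 1).
MY_NODE_ID = "node 2"
MY_CLUSTER = "cluster 1"
MY_INTERFACES = ["interface A", "interface C"]
my_limit_cache: Dict[str, dict] = {}

# Simulated configuration-center state: the latest receipt record and cached limit info.
CONFIG_CENTER = {
    "last_receipt": {"target_node": "node 1", "cluster": "cluster 1", "interface": "interface A"},
    "limit_info": {"interface A": {"max_concurrency": 80, "max_throughput": 350}},
}


def check_config_center_once() -> Optional[str]:
    """One pass of the monitoring loop: decide whether this node is an associated
    node of the target node and, if so, pull the current limiting information."""
    receipt = CONFIG_CENTER["last_receipt"]
    if receipt is None:
        return None
    same_cluster = receipt["cluster"] == MY_CLUSTER and receipt["target_node"] != MY_NODE_ID
    has_interface = receipt["interface"] in MY_INTERFACES
    if same_cluster and has_interface:
        my_limit_cache[receipt["interface"]] = CONFIG_CENTER["limit_info"][receipt["interface"]]
        return receipt["interface"]
    return None            # not an associated node for this update; do nothing


if __name__ == "__main__":
    # A real node would repeat this check whenever the configuration center changes.
    updated = check_config_center_once()
    print(updated, my_limit_cache)   # interface A pulled into node 2's cache
```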
In this embodiment, the node uses the node parameter information of the target node and the interface parameter information of the target interface to determine, in turn, whether it belongs to the same node cluster as the target node and whether it has an interface matching the interface parameter information of the target interface, thereby determining whether it is an associated node of the target node, and acquires the current limiting information of the target interface from the configuration center when it is. In this way, while the pre-processor generates current limiting information for the target interface, every associated node that includes the target interface can acquire the corresponding current limiting information, which avoids the possibility of an interface going down under a sudden traffic surge because no current limiting information was configured for it, and further improves the high availability of the system.
In an exemplary embodiment, as shown in fig. 5, a further flow limiting method is provided; the method is described here by taking its application to the pre-processor in fig. 1 as an example, and includes the following steps:
Step S501: when a traffic access request is detected, identify the request type of the traffic access request and the interface parameter information corresponding to the traffic access request.
Step S502: confirm, according to the request type, the node cluster corresponding to the traffic access request; identify, from the node cluster, a node that includes an interface matching the interface parameter information as the target node, and take the interface in the target node that matches the interface parameter information as the target interface.
Step S503: when no current limiting information corresponding to the interface parameter information of the target interface exists in the current limiting information cache of the target node, obtain historical access information of the target interface.
Step S504: input the historical access information of the target interface into the target concurrency threshold prediction model and the target throughput threshold prediction model of the node cluster corresponding to the target node, respectively, for threshold prediction, to obtain the target concurrency threshold and the target throughput threshold of the target interface.
Step S505: generate the current limiting information of the target interface according to the target concurrency threshold and the target throughput threshold of the target interface, and add the current limiting information of the target interface to the current limiting information cache of the target node.
Step S506: synchronize the current limiting information of the target interface to the current limiting information cache of each associated node of the target node.
The associated nodes are the nodes, other than the target node, in the node cluster corresponding to the target node that include the target interface.
It should be noted that the concurrency threshold prediction model and the throughput threshold prediction model are constructed as follows: historical access information of each interface under a plurality of node clusters is acquired, and a concurrency threshold prediction model and a throughput threshold prediction model are constructed for each of the plurality of node clusters according to that historical access information.
It should further be noted that the target concurrency threshold prediction model and the target throughput threshold prediction model of the node cluster corresponding to the target node are confirmed as follows: from the concurrency threshold prediction models and throughput threshold prediction models of the plurality of node clusters, the models of the node cluster corresponding to the target node are selected as the target concurrency threshold prediction model and the target throughput threshold prediction model.
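This application does not fix the form of the prediction models. As a hedged illustration only, the Java sketch below stands in a simple percentile estimator for the per-cluster models: historical access samples are grouped by node cluster when the models are built, and prediction selects the target node's cluster and takes a high percentile of the history of the target interface. The class ThresholdModels, its records, and the default values are assumptions introduced for this sketch.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical percentile-based stand-in for the per-cluster concurrency and
// throughput threshold prediction models; the real model form is not specified here.
public class ThresholdModels {

    /** One historical access sample for an interface. */
    public record AccessSample(String clusterId, String interfaceId, int concurrency, int throughput) {}

    public record Thresholds(int concurrencyThreshold, int throughputThreshold) {}

    /** Historical access information grouped per node cluster (one "model" per cluster). */
    private final Map<String, List<AccessSample>> historyByCluster = new HashMap<>();

    /** Build the per-cluster models from historical access information of all interfaces. */
    public static ThresholdModels build(List<AccessSample> history) {
        ThresholdModels models = new ThresholdModels();
        for (AccessSample s : history) {
            models.historyByCluster.computeIfAbsent(s.clusterId(), k -> new ArrayList<>()).add(s);
        }
        return models;
    }

    /** Select the model of the target node's cluster and predict thresholds for the target interface. */
    public Thresholds predict(String targetClusterId, String targetInterfaceId) {
        List<AccessSample> samples = historyByCluster.getOrDefault(targetClusterId, List.of()).stream()
                .filter(s -> s.interfaceId().equals(targetInterfaceId))
                .toList();
        if (samples.isEmpty()) {
            return new Thresholds(100, 1000);   // hypothetical defaults when no history exists
        }
        int[] conc = samples.stream().mapToInt(AccessSample::concurrency).sorted().toArray();
        int[] thru = samples.stream().mapToInt(AccessSample::throughput).sorted().toArray();
        int idx = (int) Math.floor(0.95 * (conc.length - 1));   // 95th-percentile "prediction"
        return new Thresholds(conc[idx], thru[idx]);
    }
}
```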
In this embodiment, the pre-processor determines the node cluster, the target node, and the target interface for the traffic access request according to the information carried by the request, so that the request can be processed accordingly. At the same time, it derives the target concurrency threshold and the target throughput threshold of the target interface from the historical access information of the target interface and generates corresponding current limiting information for an interface that has no current limiting information configured. By generating this current limiting information and synchronizing it to the current limiting information cache of every node that includes the interface, each interface in a system receiving traffic access requests can be configured with corresponding current limiting information; generating current limiting information for the target interface therefore configures it for every corresponding interface. A flow limiting method based on this process can avoid system downtime or service unavailability caused by a large number of traffic access requests hitting interfaces with no configured current limiting information, even when it is unclear in advance whether the access traffic of those interfaces will surge, thereby improving the high availability of the system.
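Putting steps S501 to S506 together, a pre-processor-side sketch might look as follows. The routing of the request to a cluster, node, and interface (steps S501 and S502) is assumed to have been done already, and the predictor, the configuration center client, and the rule fields are hypothetical placeholders for components that this application leaves abstract; the sketch only shows the order of the cache lookup, the threshold prediction on a cache miss, and the synchronization of the generated rule.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.BiFunction;

// Sketch of the pre-processor-side flow of steps S501-S506; all component names
// (Rule, ConfigCenter, the predictor) are illustrative assumptions only.
public class PreProcessor {

    public record Rule(String interfaceId, int concurrencyThreshold, int throughputThreshold) {}

    /** Hypothetical publisher that the configuration center exposes to the pre-processor. */
    public interface ConfigCenter { void push(String targetNodeId, Rule rule); }

    /** Per-node rule caches kept by the pre-processor: nodeId -> (interfaceId -> rule). */
    private final Map<String, Map<String, Rule>> nodeRuleCaches = new ConcurrentHashMap<>();
    private final BiFunction<String, String, Rule> predictor;   // (clusterId, interfaceId) -> rule
    private final ConfigCenter configCenter;

    public PreProcessor(BiFunction<String, String, Rule> predictor, ConfigCenter configCenter) {
        this.predictor = predictor;
        this.configCenter = configCenter;
    }

    /** Steps S503-S506 for one traffic access request that has already been routed (S501-S502). */
    public Rule onRequest(String clusterId, String targetNodeId, String targetInterfaceId) {
        Map<String, Rule> cache =
                nodeRuleCaches.computeIfAbsent(targetNodeId, k -> new ConcurrentHashMap<>());
        Rule rule = cache.get(targetInterfaceId);                 // S503: cache lookup
        if (rule == null) {
            rule = predictor.apply(clusterId, targetInterfaceId); // S504: threshold prediction
            cache.put(targetInterfaceId, rule);                   // S505: add to the target node's cache
            configCenter.push(targetNodeId, rule);                // S506: synchronize via the configuration center
        }
        return rule;                                              // later used for the limiting decision
    }
}
```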
In order to illustrate the flow limiting method provided in the embodiments of the present application more clearly, the method is described below with reference to a specific embodiment. In an exemplary embodiment, the present application further provides an automatic flow limiting method based on traffic discovery, which specifically includes the following steps:
Step 1: based on traffic discovery, the pre-processor intercepts the request, obtains data related to the request, such as the interface address and the interface parameter information, queries the current limiting rule cache of the node corresponding to the interface, and determines whether the current limiting process needs to be started. If no current limiting rule corresponding to the interface is found in the current limiting rule cache, the accessed interface has no current limiting rule configured, and the process proceeds to step 2 and step 3; if a current limiting rule corresponding to the interface is found in the current limiting rule cache, a current limiting rule has already been configured for the accessed interface, and the process proceeds to step 4.
Step 2: if no current limiting rule is configured for the interface, the pre-processor generates a current limiting rule for the interface according to default current limiting thresholds predefined by the program, such as a concurrency threshold and a throughput threshold, loads the rule into the current limiting rule cache of the node corresponding to the interface, and at the same time pushes the rule to the configuration center; the current request accesses the interface directly without entering the current limiting logic. In this way, the interface is brought under current limiting control from the next request onward, so the interface or service does not become unavailable because of a sudden increase in traffic.
Step 3: all nodes in the node cluster to which the interface belongs start listening to the configuration center at startup. When the node pushes the current limiting rule and the interface information to the configuration center in step 2, the other nodes in the cluster receive the pushed current limiting rule in time by listening to the configuration center, and a listening node that also has the corresponding interface loads the current limiting rule into its own current limiting rule cache.
Step 4: if the current limiting rule and the interface information exist in the in-memory current limiting rule cache, flow control has already been enabled for the interface, and the request can normally enter the current limiting process, which decides whether to release or discard the request (a sketch of this decision is given after this embodiment). The current limiting effect is thereby achieved and the availability of the service system is ensured.
In this embodiment, through traffic discovery and broadcasting to the other nodes, the pre-processor automatically configures the current limiting rule for all nodes in the cluster, so that current limiting rules are generated and take effect automatically, which solves the downtime caused by a sudden increase of access traffic on an interface that was not explicitly identified as needing limitation. In this process there is no need to pay attention to, or comb through, which interfaces require current limiting rules; the pre-processor sends the current limiting rules to the configuration center, and the nodes pull them from the configuration center and load them asynchronously, which reduces system overhead. In addition, although this embodiment adopts a configuration center listen-and-push mode, an alternative mode may also be used in which, after a current limiting rule is generated, it is stored in a database or a distributed cache, and the other nodes in the cluster periodically access the database or the distributed cache to obtain the updated rule.
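The release-or-discard decision of step 4 can be illustrated with the following Java sketch, which assumes a semaphore for the concurrency threshold and a one-second counter for the throughput threshold. The class name, the rule fields, and the fixed one-second window are assumptions made for this sketch; a production implementation would also rely on the listen-and-push (or polling) synchronization described above to keep the local rule cache up to date.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative current limiting decision for step 4: release or discard a request
// according to a concurrency threshold and a per-second throughput threshold.
public class InterfaceLimiter {

    private final Semaphore concurrencyPermits;                  // concurrency threshold
    private final int throughputPerSecond;                       // throughput threshold
    private final AtomicInteger windowCount = new AtomicInteger();
    private final AtomicLong windowStartMillis = new AtomicLong(System.currentTimeMillis());

    public InterfaceLimiter(int concurrencyThreshold, int throughputPerSecond) {
        this.concurrencyPermits = new Semaphore(concurrencyThreshold);
        this.throughputPerSecond = throughputPerSecond;
    }

    /** Returns true if the request is released, false if it should be discarded. */
    public boolean tryAcquire() {
        long now = System.currentTimeMillis();
        long start = windowStartMillis.get();
        if (now - start >= 1000 && windowStartMillis.compareAndSet(start, now)) {
            windowCount.set(0);                                  // roll the one-second window
        }
        if (windowCount.incrementAndGet() > throughputPerSecond) {
            return false;                                        // throughput threshold exceeded: discard
        }
        if (!concurrencyPermits.tryAcquire()) {
            return false;                                        // concurrency threshold exceeded: discard
        }
        return true;                                             // release the request to the interface
    }

    /** Must be called after a released request finishes, to return its permit. */
    public void release() {
        concurrencyPermits.release();
    }
}
```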
It should be understood that although the steps in the flowcharts of the embodiments described above are shown in sequence as indicated by the arrows, they are not necessarily performed in that sequence. Unless explicitly stated otherwise herein, the execution of these steps is not strictly ordered, and they may be performed in other orders. Moreover, at least some of the steps in the flowcharts of the embodiments described above may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be performed at different moments; their execution order is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
Based on the same inventive concept, an embodiment of the present application further provides a flow limiting device for implementing the flow limiting method described above. The solution provided by the device is similar to that described for the method above, so for the specific limitations in one or more embodiments of the flow limiting device provided below, reference may be made to the limitations of the flow limiting method above, which are not repeated here.
In an exemplary embodiment, as shown in fig. 6, there is provided a flow restriction device applied to a pre-processor, including: a node interface confirmation module 602, a current limit information generation module 604, and a current limit information synchronization module 606, wherein:
A node interface confirmation module 602, configured to confirm, when a traffic access request is detected, the target node and the target interface corresponding to the traffic access request.
A current limiting information generating module 604, configured to generate current limiting information of the target interface when the current limiting information of the target interface does not exist in the current limiting information cache of the target node, and add the current limiting information of the target interface to the current limiting information cache of the target node; and the flow limiting information of the target interface is used for the target interface to carry out flow limiting processing on the flow access request meeting the threshold condition.
A current limiting information synchronization module 606, configured to synchronize the current limiting information of the target interface to the current limiting information cache of each associated node of the target node; the associated nodes are the nodes, other than the target node, in the node cluster corresponding to the target node that include the target interface.
In an exemplary embodiment, the node interface confirmation module 602 is further configured to, in a case that the traffic access request is detected, identify a request type of the traffic access request and interface parameter information corresponding to the traffic access request; confirming a node cluster corresponding to the flow access request according to the request type; and identifying a node comprising an interface matched with the interface parameter information from the node cluster as a target node, and taking an interface matched with the interface parameter information in the target node as a target interface.
In an exemplary embodiment, the current limiting information generating module 604 is further configured to obtain historical access information of the target interface when current limiting information corresponding to the interface parameter information of the target interface does not exist in the current limiting information cache of the target node; predicting a target concurrency number threshold and a target throughput threshold of a target interface according to historical access information of the target interface; and generating the current limiting information of the target interface according to the target concurrency threshold and the target throughput threshold of the target interface.
In an exemplary embodiment, the traffic limiting apparatus further includes a prediction model building module, configured to obtain historical access information of each interface under a plurality of node clusters, and to construct, according to the historical access information of each interface under the plurality of node clusters, a concurrency threshold prediction model and a throughput threshold prediction model for the plurality of node clusters.
The current-limiting information generating module 604 is further configured to determine a target concurrency threshold prediction model and a target throughput threshold prediction model of the node cluster corresponding to the target node from the concurrency threshold prediction models and the throughput threshold prediction models of the multiple node clusters; and respectively inputting the historical access information of the target interface into a target concurrency number threshold prediction model and a target throughput threshold prediction model for threshold prediction to obtain a target concurrency number threshold and a target throughput threshold of the target interface.
In an exemplary embodiment, the current limiting information synchronization module 606 is further configured to send the current limiting information of the target interface to the configuration center; the configuration center is used for identifying interface parameter information of the target interface from the current limiting information of the target interface, confirming an associated node of the target node from a node cluster corresponding to the target node according to the interface parameter information, and sending the current limiting information of the target interface to the associated node, so that the associated node adds the received current limiting information of the target interface to a current limiting information cache of the node.
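The configuration center's part of this synchronization can be sketched as follows. The registry mapping clusters to node interfaces and the transport used to deliver a rule are assumptions not specified by this application; the sketch only shows the lookup of associated nodes (every node in the target node's cluster, other than the target node, that exposes the interface) and the forwarding of the rule to them.

```java
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.BiConsumer;

// Sketch of the configuration-center side of the synchronization: find the associated
// nodes of the target node and forward the rule to them. All names are illustrative.
public class ConfigCenterBroadcaster {

    public record Rule(String interfaceId, int concurrencyThreshold, int throughputThreshold) {}

    /** Registry grouped per cluster: clusterId -> (nodeId -> interfaces exposed by that node). */
    private final Map<String, Map<String, Set<String>>> clusters;
    private final BiConsumer<String, Rule> sender;               // delivers a rule to one node

    public ConfigCenterBroadcaster(Map<String, Map<String, Set<String>>> clusters,
                                   BiConsumer<String, Rule> sender) {
        this.clusters = clusters;
        this.sender = sender;
    }

    /** Called when the pre-processor pushes a rule generated for targetNodeId. */
    public void onRuleReceived(String clusterId, String targetNodeId, Rule rule) {
        Map<String, Set<String>> cluster = clusters.getOrDefault(clusterId, Map.of());
        List<String> associatedNodes = cluster.entrySet().stream()
                .filter(e -> !e.getKey().equals(targetNodeId))           // every node except the target node
                .filter(e -> e.getValue().contains(rule.interfaceId()))  // that also exposes the interface
                .map(Map.Entry::getKey)
                .toList();
        for (String nodeId : associatedNodes) {
            sender.accept(nodeId, rule);   // each associated node adds the rule to its own cache
        }
    }
}
```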
In an exemplary embodiment, as shown in fig. 7, there is provided another traffic limiting apparatus, applied to a node, including: a current limit information acquisition module 702 and a current limit information adding module 704.
A current limiting information obtaining module 702, configured to obtain the current limiting information of a target interface synchronized by the pre-processor when the node is an associated node of a target node; the associated nodes are the nodes, other than the target node, in the node cluster corresponding to the target node that include the target interface;
the flow limiting information of the target interface is used for the target interface to perform flow limiting processing on a flow access request meeting a threshold condition, and the pre-processor is used for confirming a target node and a target interface corresponding to the flow access request under the condition that the flow access request is detected, generating the flow limiting information of the target interface under the condition that the flow limiting information of the target interface does not exist in a flow limiting information cache of the target node, and adding the flow limiting information of the target interface into the flow limiting information cache of the target node.
A current limiting information adding module 704, configured to add the current limiting information to the current limiting information cache of the node.
In an exemplary embodiment, the traffic limiting apparatus applied to the node further includes an associated node confirmation module, configured to: when it is detected that the configuration center has received the current limiting information of the target interface sent by the pre-processor, obtain the node parameter information of the target node from the current limiting information receiving record of the configuration center, and identify the interface parameter information of the target interface from the current limiting information of the target interface cached by the configuration center; query each interface of the node when it is confirmed, according to the node parameter information of the target node, that the node and the target node belong to the same node cluster; and confirm the node as an associated node of the target node when an interface matching the interface parameter information of the target interface exists among the interfaces of the node.
The current limiting information obtaining module 702 is further configured to, when the node is an associated node of the target node, obtain current limiting information of the target interface generated by the pre-processor from a cache of the configuration center.
All or part of the modules in the above flow limiting device may be implemented by software, hardware, or a combination thereof. The above modules may be embedded in, or independent of, a processor in the computer device in hardware form, or stored in a memory in the computer device in software form, so that the processor can call and execute the operations corresponding to the modules.
In an exemplary embodiment, a computer device is provided, which may be a server; its internal structure may be as shown in fig. 8. The computer device includes a processor, a memory, an input/output interface (I/O for short), and a communication interface. The processor, the memory, and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used to store data such as the current limiting information of each interface and the historical access information of each interface. The input/output interface of the computer device is used to exchange information between the processor and an external device. The communication interface of the computer device is used to connect to and communicate with an external terminal through a network. The computer program, when executed by the processor, implements a flow limiting method.
Those skilled in the art will appreciate that the architecture shown in fig. 8 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In an exemplary embodiment, a computer device is further provided, which includes a memory and a processor, the memory stores a computer program, and the processor implements the steps of the above method embodiments when executing the computer program.
In an exemplary embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
In an exemplary embodiment, a computer program product is provided, comprising a computer program which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
It should be noted that, the user information (including but not limited to user equipment information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, displayed data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data need to comply with the relevant laws and regulations and standards of the relevant country and region.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware instructions of a computer program, which can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The nonvolatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical Memory, high-density embedded nonvolatile Memory, resistive Random Access Memory (ReRAM), magnetic Random Access Memory (MRAM), ferroelectric Random Access Memory (FRAM), phase Change Memory (PCM), graphene Memory, and the like. Volatile Memory can include Random Access Memory (RAM), external cache Memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), among others. The databases referred to in various embodiments provided herein may include at least one of relational and non-relational databases. The non-relational database may include, but is not limited to, a block chain based distributed database, and the like. The processors referred to in the various embodiments provided herein may be, without limitation, general purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, quantum computing-based data processing logic devices, or the like.
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, the combination should be considered to be within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the present application. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (12)

1. A method of flow restriction, applied to a pre-processor, the method comprising:
under the condition that a flow access request is detected, confirming a target node and a target interface corresponding to the flow access request;
generating the current limiting information of the target interface under the condition that the current limiting information of the target interface does not exist in the current limiting information cache of the target node, and adding the current limiting information of the target interface into the current limiting information cache of the target node; the flow limiting information of the target interface is used for the target interface to carry out flow limiting processing on the flow access request meeting the threshold condition;
synchronizing the current limiting information of the target interface to a current limiting information cache of an associated node of the target node; the associated nodes are the nodes, other than the target node, in the node cluster corresponding to the target node that include the target interface.
2. The method of claim 1, wherein the confirming the target node and the target interface corresponding to the flow access request in case of detecting the flow access request comprises:
under the condition that a flow access request is detected, identifying the request type of the flow access request and interface parameter information corresponding to the flow access request;
confirming a node cluster corresponding to the flow access request according to the request type;
and identifying a node comprising an interface matched with the interface parameter information from the node cluster as the target node, and taking an interface matched with the interface parameter information in the target node as the target interface.
3. The method according to claim 1, wherein the generating the current limiting information of the target interface when no current limiting information of the target interface is found in the current limiting information cache of the target node comprises:
Acquiring historical access information of the target interface under the condition that current limiting information corresponding to interface parameter information of the target interface does not exist in current limiting information cache of the target node;
predicting a target concurrency threshold and a target throughput threshold of the target interface according to historical access information of the target interface;
and generating the current limiting information of the target interface according to the target concurrency threshold and the target throughput threshold of the target interface.
4. The method of claim 3, further comprising, prior to predicting a target concurrency threshold and a target throughput threshold for the target interface based on historical access information for the target interface:
acquiring historical access information of each interface under a plurality of node clusters;
according to historical access information of each interface under the plurality of node clusters, a concurrency threshold prediction model and a throughput threshold prediction model of the plurality of node clusters are built;
the predicting a target concurrency number threshold and a target throughput threshold of the target interface according to the historical access information of the target interface comprises the following steps:
confirming a target concurrency threshold prediction model and a target throughput threshold prediction model of the node cluster corresponding to the target node from the concurrency threshold prediction models and the throughput threshold prediction models of the node clusters;
and respectively inputting the historical access information of the target interface into the target concurrency number threshold prediction model and the target throughput threshold prediction model to perform threshold prediction, so as to obtain a target concurrency number threshold and a target throughput threshold of the target interface.
5. The method of claim 1, wherein synchronizing the current limit information of the target interface into a current limit information cache of an associated node of the target node comprises:
sending the current limiting information of the target interface to a configuration center; the configuration center is used for identifying interface parameter information of the target interface from the current limiting information of the target interface, confirming an associated node of the target node from a node cluster corresponding to the target node according to the interface parameter information, and sending the current limiting information of the target interface to the associated node, so that the associated node adds the received current limiting information of the target interface to a current limiting information cache of the node.
6. A traffic limiting method applied to a node, the method comprising:
acquiring current limiting information of a target interface synchronized by a pre-processor under the condition that the node is an associated node of a target node; the associated nodes are the nodes, other than the target node, in the node cluster corresponding to the target node that include the target interface; the current limiting information of the target interface is used for the target interface to perform flow limiting processing on a flow access request meeting a threshold condition, and the pre-processor is used for confirming a target node and a target interface corresponding to the flow access request under the condition that the flow access request is detected, generating the current limiting information of the target interface under the condition that the current limiting information of the target interface does not exist in a current limiting information cache of the target node, and adding the current limiting information of the target interface into the current limiting information cache of the target node;
and adding the current limiting information into a current limiting information cache of the node.
7. The method of claim 6, wherein before obtaining the current limiting information of the target interface synchronized by the pre-processor if the node is a node associated with the target node, further comprising:
under the condition that a configuration center is detected to receive the current limiting information of the target interface sent by the front-end processor, acquiring node parameter information of the target node from a current limiting information receiving record of the configuration center, and identifying interface parameter information of the target interface from the current limiting information of the target interface cached by the configuration center;
under the condition that the node and the target node belong to the same node cluster according to the node parameter information of the target node, inquiring each interface of the node;
under the condition that an interface matching the interface parameter information of the target interface exists among the interfaces of the node, confirming the node as an associated node of the target node;
The acquiring, when the node is an associated node of the target node, current limiting information of a target interface synchronized by the pre-processor includes:
and under the condition that the node is the associated node of the target node, acquiring the current limiting information of the target interface generated by the pre-processor from a cache of the configuration center.
8. A flow restriction device, the device comprising:
the node interface confirmation module is used for confirming a target node and a target interface corresponding to the flow access request under the condition of detecting the flow access request;
the current limiting information generating module is used for generating the current limiting information of the target interface under the condition that, upon query, the current limiting information of the target interface does not exist in the current limiting information cache of the target node, and adding the current limiting information of the target interface into the current limiting information cache of the target node; the flow limiting information of the target interface is used for the target interface to carry out flow limiting processing on the flow access request meeting the threshold condition;
the current limiting information synchronization module is used for synchronizing the current limiting information of the target interface to a current limiting information cache of an associated node of the target node; the associated nodes are the nodes, other than the target node, in the node cluster corresponding to the target node that include the target interface.
9. A flow restriction device, the device comprising:
the current limiting information acquisition module is used for acquiring current limiting information of a target interface synchronized by the pre-processor under the condition that the node is an associated node of a target node; the associated nodes are the nodes, other than the target node, in the node cluster corresponding to the target node that include the target interface; the current limiting information of the target interface is used for the target interface to perform flow limiting processing on a flow access request meeting a threshold condition, and the pre-processor is used for confirming a target node and a target interface corresponding to the flow access request under the condition that the flow access request is detected, generating the current limiting information of the target interface under the condition that the current limiting information of the target interface does not exist in a current limiting information cache of the target node, and adding the current limiting information of the target interface into the current limiting information cache of the target node;
and the current limiting information adding module is used for adding the current limiting information into the current limiting information cache of the node.
10. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 7.
11. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
12. A computer program product comprising a computer program, characterized in that the computer program realizes the steps of the method of any one of claims 1 to 7 when executed by a processor.
CN202211704539.6A 2022-12-29 2022-12-29 Flow limiting method and device, computer equipment and storage medium Pending CN115967678A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211704539.6A CN115967678A (en) 2022-12-29 2022-12-29 Flow limiting method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115967678A true CN115967678A (en) 2023-04-14

Family

ID=87352541

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211704539.6A Pending CN115967678A (en) 2022-12-29 2022-12-29 Flow limiting method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115967678A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination