CN117749889A - Request processing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number: CN117749889A
Authority: CN (China)
Prior art keywords: request, identifier, connection, address, node
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to its accuracy)
Application number: CN202311871049.XA
Other languages: Chinese (zh)
Inventor: 周日明
Current assignee: Jinzhuan Xinke Co Ltd (the listed assignee may be inaccurate)
Original assignee: Jinzhuan Xinke Co Ltd
Application filed by Jinzhuan Xinke Co Ltd

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses a request processing method and apparatus, an electronic device, and a storage medium. The method comprises: acquiring a connection-close request sent by a load balancing route, where the connection-close request carries an association identifier; determining, according to the association identifier, the node address corresponding to it; and forwarding the connection-close request, according to the node address, to the associated computing node corresponding to the identifier, so that the associated computing node processes the request. By binding computing nodes to application requests through association identifiers, even if the load balancing route distributes an application request to the wrong computing node, that node can still forward the request back to the correct node according to the identifier. This greatly reduces the probability of missed kills and false kills during request processing and improves both the accuracy and the efficiency of request processing.

Description

Request processing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of distributed database technologies, and in particular, to a method and apparatus for processing a request, an electronic device, and a storage medium.
Background
To meet an application's demand for high concurrent processing performance, multiple servers are usually deployed to share the application's requests, with load balancing deployed between the front-end application and the back-end servers to route and distribute requests evenly across them. After the application establishes a connection with a back-end server through load balancing, all requests on that connection should be sent back to the same server for processing. However, when the same application opens multiple connections, load balancing distributes them across multiple servers, which easily leads to requests on a connection being handled by the wrong server.
Currently, session persistence can be configured on load balancing to address this, but if heartbeat requests cannot be sent regularly, misrouting still occurs, and requests from the same application cannot be spread across multiple computing nodes for processing. This reduces both the accuracy and the efficiency of request processing.
Disclosure of Invention
The application provides a request processing method and apparatus, an electronic device, and a storage medium, to solve the problem of missed kills and false kills that easily occur when a distributed database processes requests, and to improve the accuracy and efficiency of request processing.
According to one aspect of the present application, there is provided a request processing method applied to any current computing node, the method including:
acquiring a connection-close request sent by a load balancing route, where the connection-close request carries an association identifier;
determining, according to the association identifier, the node address corresponding to it; and
forwarding the connection-close request, according to the node address, to the associated computing node corresponding to the association identifier, so that the associated computing node processes the connection-close request.
According to another aspect of the present application, there is provided a request processing apparatus applied to any current computing node, the apparatus including:
a request acquisition module, configured to acquire a connection-close request sent by the load balancing route, where the connection-close request carries an association identifier;
an address determination module, configured to determine, according to the association identifier, the node address corresponding to it; and
a request processing module, configured to forward the connection-close request, according to the node address, to the associated computing node corresponding to the association identifier, so that the associated computing node processes the connection-close request.
According to another aspect of the present application, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the request processing method described in any one of the embodiments of the present application.
According to another aspect of the present application, there is provided a computer readable storage medium storing computer instructions for causing a processor to execute a request processing method according to any embodiment of the present application.
In the technical scheme of the present application, computing nodes are bound to application requests through association identifiers. Even if the load balancing route distributes an application request to the wrong computing node, that node can still forward the request back to the correct node according to the identifier. This greatly reduces the probability of missed kills and false kills during request processing and improves both the accuracy and the efficiency of request processing.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the application, nor to delineate the scope of the application. Other features of the present application will become apparent from the following description.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below cover only some embodiments of the present application, and a person skilled in the art may derive other drawings from them without inventive effort.
FIG. 1 is a flow chart of a request processing method according to a first embodiment of the present application;
FIG. 2A is a schematic diagram of distributing an association request according to a second embodiment of the present application;
FIG. 2B is a schematic diagram of distributing an association request according to a second embodiment of the present application;
FIG. 2C is a schematic diagram of distributing an association request according to a second embodiment of the present application;
FIG. 2D is a schematic diagram of distributing an association request according to a second embodiment of the present application;
fig. 3 is a schematic structural diagram of a request processing device according to a third embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device implementing a request processing method according to an embodiment of the present application.
Detailed Description
To make the solution of the present application better understood by those skilled in the art, the embodiments are described in detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein without inventive effort shall fall within the scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
Fig. 1 is a flowchart of a request processing method according to an embodiment of the present application. The method may be performed by a request processing device, which may be implemented in hardware and/or software and configured in an electronic device. As shown in fig. 1, the method includes:
s110, acquiring a connection closing request sent by a load balancing route; wherein the close connection request carries an association identifier.
The load balancing route may be a balancer deployed between a front-end application and multiple back-end servers (e.g., a distributed computing system) to distribute high-concurrency load. To meet an application's demand for high concurrent processing performance, a user may share application requests across multiple servers (or computing nodes), with the load balancing route distributing the requests evenly among them. The connection-close request is one kind of application request: it is the counterpart of the connection-initiation request, and is generally initiated by the front-end application when a previously initiated connection needs to be killed (closed), then distributed to the back end through the load balancing route for processing. The type of application request is not limited in this application; it may be, for example, a JDBC (Java Database Connectivity) request.
The association identifier may be an identifier or code that has an association relationship with a particular back-end server or computing node. After any current computing node receives a connection-close request sent by the load balancing route, it can judge from the association identifier whether the request belongs to its own node (or server) and decide how to process it.
S120, determining the node address corresponding to the association identifier according to the association identifier.
Since the association identifier has an association relationship with a computing node, the node address of the corresponding computing node can be looked up from the identifier, for example by presetting an association table between each identifier and the node address of each computing node.
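As a minimal sketch of the preset association table described above (the table contents, addresses, and function name are illustrative assumptions, not part of the patent):

```python
# Hypothetical in-memory association table mapping each association
# identifier to the address of the computing node that owns it.
ASSOCIATION_TABLE = {
    "id1": "10.0.0.1:3306",  # identifier id1 belongs to computing node 1
    "id2": "10.0.0.2:3306",  # identifier id2 belongs to computing node 2
}

def resolve_node_address(association_id: str):
    """Return the node address registered for an association identifier,
    or None when the identifier is unknown."""
    return ASSOCIATION_TABLE.get(association_id)
```

In a real deployment this table would live in the shared component rather than in process memory, so that every computing node sees the same registrations.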
S130, forwarding the connection closing request to the associated computing node corresponding to the associated identifier according to the node address so that the associated computing node processes the connection closing request.
The associated computing node is the computing node corresponding to the association identifier; the two have an association relationship, so the node address uniquely determines the computing node that should process the connection-close request. Note that when a computing node (for example, the associated computing node) previously accepted a connection-initiation request and successfully established the connection, the matching connection-close request must be processed by that same node when the connection is to be killed. However, because the load balancing route balances all types of requests alike, a connection-close request intended for that node may be sent to another node. In the steps of this embodiment, when the current computing node receives a connection-close request, it first verifies the association identifier carried in the request, determines the node address of the associated computing node corresponding to the identifier, and forwards the request to that address, so that the associated computing node can kill the connection accurately.
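The decision made in S110–S130 can be sketched as follows; the callable parameters `resolve`, `forward`, and `process` are hypothetical stand-ins for the real address lookup, network forwarding, and kill logic:

```python
def handle_close_request(association_id, local_address, resolve, forward, process):
    """Process a connection-close request locally when this node owns the
    association identifier; otherwise forward it to the owning node.

    resolve(id) -> owning node address (or None if unregistered)
    forward(addr, id) -> forwards the request to another node
    process(id) -> kills the connection on this node
    """
    owner = resolve(association_id)
    if owner is None or owner == local_address:
        return process(association_id)       # this node owns the connection
    return forward(owner, association_id)    # misrouted: send to the owner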
In this technical scheme, computing nodes are bound to application requests through association identifiers. Even if the load balancing route distributes an application request to the wrong computing node, that node can still forward the request back to the correct node according to the identifier. This greatly reduces the probability of missed kills and false kills during request processing and improves both the accuracy and the efficiency of request processing.
In an optional embodiment, before acquiring the connection-close request sent by the load balancing route, the method may include: sending a free-identifier query request to a shared component, determining a target identifier, and associating it with the address of the current computing node; and constructing a connection based on the target identifier.
The shared component may be a database, or a distributed application coordination service such as ZooKeeper. The shared component stores association identifiers; before being associated, an identifier stored there is free. When a computing node (or server) accesses the shared component and queries for a free identifier, the identifier is occupied, becomes the target identifier, is associated with the node, and is fed back to the front-end application, so that the front-end application carries the identifier in every subsequent request related to that node. In this way all application requests can be tracked.
In one example, the front-end application sends a connection-initiation request to a computing node. The node queries the shared component for free identifiers, occupies one of them as the target identifier, associates it with its own address, and returns the target identifier along with the connection-established response, so that the front-end application carries the target identifier whenever it sends further requests related to that node.
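A toy sketch of the free-identifier occupation described in this example, with a locked in-memory structure standing in for the real shared component (a database or ZooKeeper); all class and method names are illustrative assumptions:

```python
import threading

class SharedComponent:
    """Toy stand-in for the shared component: it hands out free identifiers
    and records which node address occupies each one."""

    def __init__(self, free_ids):
        self._free = list(free_ids)
        self._owners = {}            # identifier -> node address
        self._lock = threading.Lock()

    def occupy_free_id(self, node_address):
        """Atomically take a free identifier and associate it with a node;
        this mirrors querying and occupying the target identifier."""
        with self._lock:
            target_id = self._free.pop(0)
            self._owners[target_id] = node_address
            return target_id

    def owner_of(self, identifier):
        """Look up the node address associated with an identifier."""
        return self._owners.get(identifier)

    def release(self, identifier):
        """Clear the registration when the connection is killed,
        returning the identifier to the free pool."""
        with self._lock:
            self._owners.pop(identifier, None)
            self._free.append(identifier)
```

A real ZooKeeper-based implementation could use ephemeral znodes for the same purpose, so that a crashed node's registrations are cleaned up automatically.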
Correspondingly, determining the node address corresponding to the association identifier may include: sending a verification request to the shared component according to the association identifier, so that the shared component verifies the identifier against the address of the current computing node; and determining the node address corresponding to the identifier according to the verification result fed back by the shared component.
Continuing the steps of the foregoing embodiment: after receiving a new application request, the current computing node verifies with the shared component according to the association identifier carried in the request. The shared component already stores the association between each identifier and its node address, so it can determine whether the address of the current computing node matches the identifier. If they do not match, the shared component feeds back the correct node address for the identifier, so that the current computing node can forward the new application request to that address.
This optional implementation provides one feasible scheme for verifying the association identifier: when the connection-initiation request is sent, a free identifier is queried from the shared component and occupied for address association, so that the front-end application carries the identifier in the subsequent connection-close request. This helps computing nodes and servers distinguish different requests, greatly improving the accuracy and efficiency of request processing.
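The verification step can be sketched as a single check against the shared component's records; the function name and return convention here are assumptions:

```python
def verify_association(shared_owner_lookup, association_id, current_address):
    """Ask the shared component whether the current node owns the identifier.

    shared_owner_lookup(id) -> the registered owner address for the identifier.
    Returns (matches, owner_address): when matches is False, the caller should
    forward the request to owner_address instead of processing it locally.
    """
    owner = shared_owner_lookup(association_id)
    return (owner == current_address, owner)
```

For example, with a registration table `{"id1": "nodeA"}`, a node at address `nodeB` receives `(False, "nodeA")` and knows where to forward.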
In another optional embodiment, before acquiring the connection-close request sent by the load balancing route, the method may further include: sending a rule query request to the shared component to determine the prefix name for each node address in the address-marking rule; and generating the association identifier according to the prefix name.
The address-marking rule is a marking rule between association identifiers and the addresses of computing nodes; the prefix name is the label that ties an identifier to a node address, and may be a prefix or a suffix. For example, the prefix "01" in an association identifier corresponds to the computing node whose code is 01.
In another example, rules between connection prefixes and node addresses are configured in the shared component in advance, i.e., each address is paired with a code used as the prefix when generating association identifiers. When any computing node receives a connection-initiation request, it sends a rule query request to the shared component, queries the rule between prefixes and node addresses, obtains the prefix corresponding to itself, generates an association identifier bound to itself, and feeds it back to the front-end application. When the front-end application later sends a connection-close request to the current computing node, it also sends the association identifier carrying the prefix information.
Correspondingly, determining the node address corresponding to the association identifier may further include: determining the prefix name in the association identifier; and determining the node address corresponding to the identifier according to the prefix name and the address-marking rule.
Continuing the previous example: when the current computing node receives the connection-close request sent by the front-end application, it parses the association identifier carried by the request to obtain the code in the prefix name, and then finds the node address of the corresponding computing node according to the address-marking rule. If that address is not the current node's own address, the current computing node forwards the connection-close request to it.
This provides another feasible scheme for verifying the association identifier: a prefix name tied to the node address is preset for every computing node, and association identifiers are generated on that basis, so node-address information is embedded in the identifier carried by every application request the front-end application sends. This helps computing nodes and servers distinguish different requests, greatly improving the accuracy and efficiency of request processing.
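A minimal sketch of the prefix-based scheme: generating an identifier whose leading characters encode the node, and recovering the node address from that prefix. The two-character prefix width, the mapping values, and the counter format are assumptions for illustration:

```python
import itertools

# Hypothetical address-marking rule: prefix code -> node address.
PREFIX_RULE = {
    "01": "10.0.0.1:3306",
    "02": "10.0.0.2:3306",
}

_counter = itertools.count(1)  # the "xxxx" part, controlled by the node itself

def generate_identifier(prefix: str) -> str:
    """Build an association identifier whose prefix encodes the node,
    e.g. prefix '01' plus a local sequence number -> '010001'."""
    return f"{prefix}{next(_counter):04d}"

def lookup_by_prefix(association_id: str):
    """Recover the owning node's address from the identifier's prefix,
    or None if the prefix is not configured."""
    return PREFIX_RULE.get(association_id[:2])
```

Unlike the shared-component query, this lookup needs no network round trip: the routing information travels inside the identifier itself.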
Example two
This embodiment provides preferred implementations based on the foregoing, with concrete examples corresponding to the two optional embodiments above. As shown in the figures, the scheme for ensuring that multiple associated connections are sent to the correct server after load balancing is implemented as follows:
The association identifiers generated by the back-end servers are kept distinct, so an association identifier uniquely determines which server handles the request. Each back-end server either registers every association identifier it produces, together with its own address, through the shared component, or configures prefix rules for the identifiers. When a back-end server receives an association request sent by load balancing, it finds the address of the server that owns the identifier, either by querying the shared component or by the prefix rule, and forwards the request to the correct server node.
As shown in fig. 2A, a server registers the association between an identifier and itself by querying the shared component for a free identifier and occupying it; on that basis, application requests are verified and forwarded to the registered server address, so that each request reaches the correct server. The specific steps for establishing the connection are as follows:
and step 1, initializing connection by the application through a network protocol.
Step 2, load balancing routes the request to a certain server process (e.g. server 1).
Step 3, the server 1 queries a currently unused Related-ID (association identifier), such as id1, occupies it, and associates id1 with its own address (so that other servers can query it later).
Step 4, the shared component completes the registration and returns id1.
Step 5, the server 1 returns a connection-initialization response carrying the id1 value.
And 6, load balancing sends the response back to the application.
The application subsequently initiates other requests on this connection. The specific steps for initiating an association request are as follows:
and 7, the application initiates an association request.
Step 8, the request is routed to the back end through load balancing; since it arrives on a new connection initiated by the application, it is routed to another server (e.g., server 2) according to the balancing policy.
And 9, the server 2 initiates a query to the sharing component to query the server address corresponding to the id1.
Step 10, the sharing component returns the server address corresponding to id1 (here, the address of server 1).
Step 11, the server 2 forwards the request to the server 1.
Step 12, the server 1 receives the association request and processes the request.
Step 13, if the new request releases the associated connection, the server 1 clears the information registered on the shared component (id1 and the associated server 1 address).
Step 14, the server 1 returns a processing response to the server 2.
Step 15, the server 2 returns a processing response to the load balancing.
And step 16, load balancing returns a processing response to the front-end application.
In yet another example, local configuration is done in advance in the shared component: the correspondence between connection prefix rules and server addresses is configured, an association identifier is generated from the prefix corresponding to the server when the connection is constructed, and the identifier is returned to the front-end application. When a server later receives an association request, it can directly identify from the front part of the association identifier which server should process the request, and forward it there, as shown in fig. 2B. The steps are as follows:
configuring a prefix associated with a server in a shared component:
step 1, an operation and maintenance person modifies a local configuration file of a shared component or a server, writes a prefix of a server association identifier in a cluster, and a server number and a server address.
And 2, when the server is started, reading the shared component or the local configuration file. Or periodically read or triggered by a command to dynamically append the address configuration of the newly added server.
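Steps 1–2 above might correspond to a configuration file like the following sketch; the INI layout, section names, and keys are assumptions for illustration, not a format specified by the patent:

```python
import configparser

# Hypothetical local configuration written by the operator: one section per
# server, giving the identifier prefix, server number, and server address.
CONFIG_TEXT = """
[server1]
prefix = 01
number = 1
address = 10.0.0.1:3306

[server2]
prefix = 02
number = 2
address = 10.0.0.2:3306
"""

def load_prefix_rules(text: str) -> dict:
    """Parse the configuration into a prefix -> address mapping, which is
    what the routing step needs when it inspects an identifier's prefix."""
    parser = configparser.ConfigParser()
    parser.read_string(text)
    return {parser[s]["prefix"]: parser[s]["address"] for s in parser.sections()}
```

Re-running `load_prefix_rules` on a periodically refreshed file is one way to pick up newly added servers dynamically, as step 2 describes.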
Initiating connection:
and 3, when connection is initiated, the application reaches a certain server, such as a server 1, through load balancing, the server 1 generates an association identifier id1 = 01xxxx (wherein 01 is a prefix, and the xxxx is controlled by the server by itself) according to prefix configuration rules, and the association identifier is returned to the application.
Killing the connection:
and 4, the application initiates a connection killing request, and the connection identifier is designated as id1=01xxxx.
Step 5, since the request is initiated on the new connection, the request is routed to other servers, such as server 2, after load balancing.
Step 6, the server 2 determines that the association identifier is generated by the server 1 according to the rule prefix 01 configured in advance.
Step 7, the server 2 forwards the request to the server 1 for processing.
And step 8, the server 1 receives the association request and processes the association request.
Step 9, the server 1 returns a processing response to the server 2.
Step 10, the server 2 returns a response to the load balancing.
And step 11, load balancing returns a response to the application.
Similarly, besides the above two implementations, there are two analogous implementations for the different computing nodes of a distributed database. Taking a distributed database as an example, the database connection identifier (Connection ID) serves as the association identifier (Related-ID). The distributed database has multiple computing nodes in the background, and each node registers every Connection ID it produces, together with its own address, through the shared component.
The shared component may be a database, a ZooKeeper instance, or a custom-built component; it only needs to support registering and querying the association between a Connection ID and the computing node where the connection lives.
When a computing node receives a KILL QUERY / KILL CONNECTION request sent by load balancing, it queries the shared component for the correct computing node and forwards the request there for processing. As shown in fig. 2C, the specific steps are as follows:
the specific steps for establishing the connection are as follows:
and step 1, initiating initialization connection by the application through a communication protocol.
Step 2, load balancing routes the request to a certain computing node (e.g., computing node 1).
Step 3, the computing node 1 queries a currently unused Connection ID (for example, id1), occupies it, and associates id1 with its own address (so that other computing nodes can query it later).
Step 4, the shared component completes the registration and returns id1.
Step 5, the computing node 1 returns a connection-initialization response carrying the id1 value.
And 6, load balancing sends the response back to the application.
The application subsequently initiates other requests on this connection, such as a close-connection request. The specific steps for killing the connection are as follows:
and 7, the application initiates a request for killing the connection.
Step 8, the request is routed to the back-end computing nodes through load balancing; since it arrives on a new connection initiated by the application, it is routed to another computing node (e.g., computing node 2) according to the balancing policy.
And 9, the computing node 2 initiates a query to the sharing component to query the computing node address corresponding to the id1.
Step 10, the sharing component returns the computing node address corresponding to id1 (here, the address of computing node 1).
Step 11, computing node 2 forwards the request onto computing node 1.
Step 12, the computing node 1 receives the KILL QUERY / KILL CONNECTION request, kills the corresponding query or connection, and releases the resources (for KILL QUERY, only the resources of the current request are released; for KILL CONNECTION, the entire connection is released).
Step 13, if the request is KILL CONNECTION, the computing node 1 clears the information registered on the shared component (id1 and the associated computing node 1 address).
Step 14, the computing node 1 returns a processing response to the computing node 2.
Step 15, the computing node 2 returns a processing response to the load balancing.
And step 16, load balancing returns a processing response to the application.
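The resource-release distinction in steps 12–13 can be sketched as follows, with a dict standing in for the connection state and `shared_release` standing in for clearing the registration on the shared component; all names are illustrative:

```python
def handle_kill(kind, connection, shared_release):
    """Release resources according to the kill type.

    KILL QUERY frees only the currently running request; KILL CONNECTION
    frees the whole connection and clears the registration in the shared
    component (step 13). `connection` is a dict sketch of connection state.
    """
    if kind == "KILL QUERY":
        connection["current_request"] = None          # abort the running query only
        return "query released"
    if kind == "KILL CONNECTION":
        connection["current_request"] = None
        connection["open"] = False                    # tear down the whole connection
        shared_release(connection["association_id"])  # clear id1 -> node mapping
        return "connection released"
    raise ValueError(f"unknown kill type: {kind}")
```

Clearing the shared registration only on KILL CONNECTION matters: after a KILL QUERY the connection stays open, so later requests carrying id1 must still route back to node 1.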
In another example, specific to a distributed database, the problem of missed kills and false kills is solved by sending the KILL QUERY / KILL CONNECTION request routed by load balancing to the correct computing node. The distributed database has multiple computing nodes in the background, and different nodes never produce the same connection identifier, so the association identifier uniquely determines which computing node processes the association request. Each computing node either registers every association identifier it produces, together with its own address, through the shared component, or configures the prefix rules used to produce identifiers. When a computing node receives a KILL QUERY / KILL CONNECTION request sent by load balancing, it obtains the correct computing node by querying the shared component or by the prefix rule, and forwards the request there for processing, as shown in fig. 2D. The steps are as follows:
Step 1, operation and maintenance personnel modify the sharing component or the local configuration files of the computing nodes, writing in, for each computing node in the cluster, the prefix used to generate connection identifiers, the computing node number, and the computing node address.
Step 2, when a computing node starts, it reads the sharing component or the local configuration file; it may also read periodically, or be triggered by a command, so as to dynamically append the address configuration of newly added computing nodes.
Initiating connection:
Step 3, when a connection is initiated, the application reaches a certain computing node, such as computing node 1, through load balancing; computing node 1 generates a connection identifier id1 = 01xxxx according to the prefix configuration rule (where 01 is the prefix and xxxx is controlled by the computing node itself) and returns it to the application.
Killing the connection:
Step 4, the application (APP) initiates a kill-connection request, designating the connection identifier id1 = 01xxxx.
Step 5, since the request is initiated on a new connection, after load balancing it is routed to another computing node, such as computing node 2.
Step 6, computing node 2 determines, from the preconfigured rule prefix 01, that the connection identifier was generated by computing node 1.
Step 7, the computing node 2 forwards the request to the computing node 1 for processing.
Step 8, if computing node 1 receives a KILL QUERY request, it releases the resources of the current request; if it receives a KILL CONNECTION request, it releases the resources of the entire connection.
Step 9, the computing node 1 returns a processing response to the computing node 2.
Step 10, the computing node 2 returns a response to load balancing.
And step 11, load balancing returns a response to the application.
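The prefix-rule variant of steps 1 through 7 can be sketched as follows. The prefix table, class names, and the 2-character prefix length are illustrative assumptions; the patent only requires that the identifier's prefix deterministically identify the generating node.

```python
import itertools

# Step 1: cluster-wide configuration mapping prefix -> node address
# (hypothetical addresses for illustration).
PREFIX_TABLE = {"01": "node1:3306", "02": "node2:3306"}


class PrefixNode:
    def __init__(self, prefix, addr):
        self.prefix, self.addr = prefix, addr
        self._seq = itertools.count(1)  # locally controlled suffix counter

    def new_connection_id(self):
        # Step 3: identifier = prefix + locally controlled suffix,
        # e.g. "01" + "0001" -> "010001".
        return f"{self.prefix}{next(self._seq):04d}"

    @staticmethod
    def owner_of(conn_id):
        # Step 6: the leading characters are the prefix of the node
        # that generated the identifier.
        return PREFIX_TABLE.get(conn_id[:2])


node1 = PrefixNode("01", "node1:3306")
cid = node1.new_connection_id()  # "010001"
```

Because ownership is encoded in the identifier itself, this variant needs no registry lookup at kill time: any node that receives the request resolves `owner_of(cid)` locally and forwards, at the cost of having to keep the prefix configuration consistent across the cluster.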
The four embodiments above ensure that, after multiple associated requests in a distributed system pass through load balancing, the associated requests still correctly return to the associated server, avoiding missed or erroneous processing of associated requests. Compared with the prior art, no special handling is required of the application or the load balancing, and no functionality is lost.
Applied specifically to a distributed database, this solves the problem of database connections being missed or wrongly killed. Compared with the prior art, the application and the load balancing do not need to configure heartbeats or keep-alives, and a KILL QUERY/KILL CONNECTION request can be correctly delivered to the correct computing node for correct processing, solving the problem of missed and erroneous kills for KILL QUERY/KILL CONNECTION. The method maintains the processing performance of the standard JDBC interface, is compatible with community or third-party JDBC drivers, is transparent to the application layer, the JDBC driver, load balancing, the network, and firewalls, causes no loss of functionality, and has good adaptability.
Example III
Fig. 3 is a schematic structural diagram of a request processing device according to a third embodiment of the present application. As shown in fig. 3, the apparatus 300 includes:
a request acquisition module 310, configured to acquire a connection closing request sent by a load balancing route; wherein, the connection closing request carries an associated identifier;
an address determining module 320, configured to determine, according to the association identifier, a node address corresponding to the association identifier;
the request processing module 330 is configured to forward the connection closing request to the associated computing node corresponding to the association identifier according to the node address, so that the associated computing node processes the connection closing request.
According to this technical scheme, computing nodes are associated with application requests by means of the association identifier; even if the load balancing route distributes an application request to the wrong computing node, that computing node can still forward the application request back to the correct node for processing according to the association identifier, greatly reducing the probability of missed and erroneous kills during request processing and improving the accuracy and efficiency of request processing.
In an alternative embodiment, the apparatus may include:
the mark association module is used for sending an idle mark inquiry request to the sharing component, determining a target identifier and associating with the address of the current computing node;
and the connection construction module is used for constructing the connection based on the target identifier.
Accordingly, the address determining module 320 may include:
the mark verification unit is used for sending a verification request to the sharing component according to the association identifier so that the sharing component verifies the association identifier and the address of the current computing node;
and the address determining unit is used for determining the node address corresponding to the sharing identifier according to the verification result fed back by the sharing component.
In another alternative embodiment, the apparatus may further include:
the prefix name determining module is used for sending a rule query request to the sharing component and determining the prefix name of the address of each node in the address marking rule;
and the mark generation module is used for generating an association identifier according to the prefix name.
Accordingly, the address determining module 320 may further include:
the prefix name determining unit is used for determining the prefix name in the association identifier according to the association identifier;
and the node address determining unit is used for determining the node address corresponding to the association identifier according to the prefix name and the address marking rule.
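The two alternative embodiments of the address determining module can be condensed into a single dispatch, sketched below. The function name, parameter names, and the 2-character prefix length are hypothetical; the point is only that the module resolves an association identifier to a node address either through the sharing component or through the address marking rule.

```python
def determine_node_address(assoc_id, sharing_registry=None,
                           marking_rule=None, prefix_len=2):
    """Resolve the node address owning `assoc_id` (illustrative sketch)."""
    if sharing_registry is not None:
        # First embodiment: the sharing component verifies the identifier
        # and feeds back the registered node address.
        return sharing_registry.get(assoc_id)
    # Second embodiment: the prefix name embedded in the identifier is
    # resolved through the preconfigured address marking rule.
    return marking_rule.get(assoc_id[:prefix_len])
```

Either path returns `None` for an unknown identifier, which a caller could treat as "connection already closed" rather than as an error.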
The request processing device provided by the embodiments of the application can execute the request processing method provided by any embodiment of the application, and has the corresponding functional modules and beneficial effects for executing the request processing method.
Example IV
Fig. 4 shows a schematic diagram of the structure of an electronic device 10 that may be used to implement embodiments of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic equipment may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the application described and/or claimed herein.
As shown in fig. 4, the electronic device 10 includes at least one processor 11, and a memory, such as a Read Only Memory (ROM) 12, a Random Access Memory (RAM) 13, etc., communicatively connected to the at least one processor 11, in which the memory stores a computer program executable by the at least one processor, and the processor 11 may perform various appropriate actions and processes according to the computer program stored in the Read Only Memory (ROM) 12 or the computer program loaded from the storage unit 18 into the Random Access Memory (RAM) 13. In the RAM 13, various programs and data required for the operation of the electronic device 10 may also be stored. The processor 11, the ROM 12 and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to bus 14.
Various components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, etc.; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The processor 11 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, Digital Signal Processors (DSPs), and any suitable processor, controller, microcontroller, etc. The processor 11 performs the respective methods and processes described above, such as the request processing method.
In some embodiments, the request processing method may be implemented as a computer program tangibly embodied on a computer-readable storage medium, such as the storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into the RAM 13 and executed by the processor 11, one or more steps of the request processing method described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the request processing method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out the methods of the present application may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this application, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, which is a host product in the cloud computing service system and overcomes the defects of high management difficulty and weak service scalability of traditional physical hosts and VPS services.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present application may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solutions of the present application are achieved, and the present application is not limited herein.
The above embodiments do not limit the scope of the application. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present application are intended to be included within the scope of the present application.

Claims (10)

1. A method of request processing, applied to any one of current computing nodes, the method comprising:
acquiring a connection closing request sent by a load balancing route; wherein, the connection closing request carries an associated identifier;
determining a node address corresponding to the association identifier according to the association identifier;
and forwarding the connection closing request to an associated computing node corresponding to the associated identifier according to the node address, so that the associated computing node processes the connection closing request.
2. The method of claim 1, wherein prior to the obtaining the connection closing request sent by the load balancing route, the method comprises:
sending an idle mark inquiry request to a sharing component, determining a target identifier and associating with the address of the current computing node;
based on the target identifier, a connection is constructed.
3. The method according to claim 2, wherein determining the node address corresponding to the association identifier according to the association identifier comprises:
sending a verification request to the sharing component according to the association identifier so that the sharing component verifies the association identifier and the address of the current computing node;
and determining the node address corresponding to the sharing identifier according to the verification result fed back by the sharing component.
4. The method of claim 1, wherein prior to the obtaining the connection closing request sent by the load balancing route, the method further comprises:
sending a rule query request to the sharing component, and determining the prefix name of the address of each node in the address marking rule;
and generating the association identifier according to the prefix name.
5. The method of claim 4, wherein determining the node address corresponding to the association identifier according to the association identifier further comprises:
determining the prefix name in the associated identifier according to the associated identifier;
and determining the node address corresponding to the association identifier according to the prefix name and the address marking rule.
6. A request processing apparatus for application to any one of current computing nodes, the apparatus comprising:
the request acquisition module is used for acquiring a connection closing request sent by the load balancing route; wherein, the connection closing request carries an associated identifier;
the address determining module is used for determining the node address corresponding to the association identifier according to the association identifier;
and the request processing module is used for forwarding the connection closing request to the associated computing node corresponding to the associated identifier according to the node address so that the associated computing node processes the connection closing request.
7. The apparatus of claim 6, wherein the apparatus comprises:
the mark association module is used for sending an idle mark inquiry request to the sharing component, determining a target identifier and associating with the address of the current computing node;
and the connection construction module is used for constructing connection based on the target identifier.
8. The apparatus of claim 6, wherein the apparatus further comprises:
the prefix name determining module is used for sending a rule query request to the sharing component and determining the prefix name of the address of each node in the address marking rule;
and the mark generation module is used for generating the association identifier according to the prefix name.
9. An electronic device, the electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the request processing method of any one of claims 1-5.
10. A computer readable storage medium storing computer instructions for causing a processor to perform the request processing method of any one of claims 1-5.
Publications (1)

Publication Number: CN117749889A — Publication Date: 2024-03-22
Family ID: 90259353
Country Status (1): CN — CN117749889A


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination