CN114356575A - User request processing method and device


Info

Publication number
CN114356575A
Authority
CN
China
Prior art keywords
request
target
processing
user request
user
Prior art date
Legal status
Pending
Application number
CN202210014150.2A
Other languages
Chinese (zh)
Inventor
田立勇
黄圣彪
何剑
Current Assignee
Shanghai Hode Information Technology Co Ltd
Original Assignee
Shanghai Hode Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Hode Information Technology Co Ltd
Priority to CN202210014150.2A
Publication of CN114356575A
Legal status: Pending

Abstract

The application provides a user request processing method and a user request processing device. The user request processing method includes: receiving a user request of a target service, wherein the user request includes a request identifier corresponding to a user; based on the request identifier, determining a target processing end identifier allocated to the request identifier according to a rule for uniformly allocating a preset number of processing end identifiers, wherein the preset number of processing end identifiers are uniformly allocated to the processing ends; and searching for the target processing end having the target processing end identifier, and calling an instance in the target processing end to process the user request. The scheme can improve the balance of user request processing.

Description

User request processing method and device
Technical Field
The present application relates to the field of computer technologies, and in particular, to a user request processing method. The application also relates to a user request processing device, a computing device and a computer readable storage medium.
Background
With the increase of service volume and the number of users, a single machine room or dual machine rooms in the same city can no longer support continuous service expansion. Therefore, remote disaster recovery has become a standard configuration for services.
In the related art, multiple machine rooms are usually deployed for a service. However, such deployment suffers from uneven traffic distribution, i.e., user requests cannot be handled uniformly. Therefore, a more balanced solution is needed.
Disclosure of Invention
In view of this, an embodiment of the present application provides a user request processing method. The application also relates to a user request processing device, a computing device and a computer readable storage medium, which are used for solving the problem that the user request processing in the prior art is not balanced enough.
According to a first aspect of an embodiment of the present application, a method for processing a user request is provided, including:
receiving a user request of a target service, wherein the user request comprises a request identifier corresponding to a user;
based on the request identifier, determining a target processing end identifier allocated to the request identifier according to a rule for uniformly allocating a preset number of processing end identifiers, wherein the preset number of processing end identifiers are uniformly allocated to the processing ends;
and searching for a target processing end having the target processing end identifier, and calling an instance in the target processing end to process the user request.
According to a second aspect of the embodiments of the present application, there is provided a user request processing apparatus, including:
a receiving module configured to receive a user request of a target service, wherein the user request comprises a request identifier corresponding to a user;
an allocating module configured to determine, based on the request identifier, a target processing end identifier allocated to the request identifier according to a rule for uniformly allocating a preset number of processing end identifiers, wherein the preset number of processing end identifiers are uniformly allocated to the processing ends;
and a processing module configured to search for a target processing end having the processing end identifier and call an instance in the target processing end to process the user request.
According to a third aspect of embodiments herein, there is provided a computing device comprising a memory, a processor and computer instructions stored on the memory and executable on the processor, the processor implementing the steps of the user request processing method when executing the instructions.
According to a fourth aspect of embodiments of the present application, there is provided a computer-readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the user request processing method.
Embodiments of the present application receive a user request of a target service, where the user request includes a request identifier corresponding to a user; based on the request identifier, determine a target processing end identifier allocated to the request identifier according to a rule for uniformly allocating a preset number of processing end identifiers; search for the target processing end having that processing end identifier; and call an instance in the target processing end to process the user request. Because the preset number of processing end identifiers are uniformly allocated to the processing ends, the processing ends can bear user requests evenly. Determining the target processing end identifier according to the uniform allocation rule, searching for the target processing end having that identifier, and calling an instance in it to process the user request ensures that user requests are evenly distributed to different processing ends. Moreover, the request identifier corresponds to the user, and the target processing end identifier is allocated based on the request identifier. Therefore, user requests of the same user can be processed by the same target processing end, which avoids the additional data synchronization otherwise needed when different processing ends process requests of the same user, and improves the stability of the processing ends, thereby guaranteeing the balance of user request processing. Hence, the scheme can improve the balance of user request processing.
Drawings
Fig. 1 is a flowchart of a user request processing method according to an embodiment of the present application;
FIG. 2 is a diagram illustrating an exemplary architecture of a user request processing system according to an embodiment of the present application;
FIG. 3 is a diagram illustrating an example use case of a user request processing system according to an embodiment of the present application;
FIG. 4 is a flowchart illustrating a user request processing method applied to a user request processing system according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a user request processing apparatus according to an embodiment of the present application;
fig. 6 is a block diagram of a computing device according to an embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. However, the present application can be implemented in many other ways than those described herein, and those skilled in the art can make similar modifications without departing from the spirit of the present application; therefore, the present application is not limited to the specific implementations disclosed below.
The terminology used in the one or more embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the one or more embodiments of the present application. As used in one or more embodiments of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present application refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It will be understood that, although the terms first, second, etc. may be used herein in one or more embodiments of the present application to describe various information, these information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, a first aspect may be termed a second aspect, and, similarly, a second aspect may be termed a first aspect, without departing from the scope of one or more embodiments of the present application. The word "if" as used herein may be interpreted as "upon" or "when" or "in response to determining", depending on the context.
First, the noun terms to which one or more embodiments of the present application relate are explained.
Service unitization: dividing the architecture of a system according to a certain data characteristic dimension, where each divided unit can realize the same functions.
Service discovery (Discovery): refers to a function that allows other services or callers to discover the information stored during the service registration phase. The main advantage of service discovery is that services can be invoked only through service information, such as the name of the service, without knowing the deployment topology of the architecture, and it provides a coordination mechanism for service publishing and lookup. The key to service discovery is the service registry, which is a database of available service instances that provides a management Application Programming Interface (API) and a query API. Service instances use the management API to register and deregister services, and system components use the query API to discover available service instances.
Instance: an object used to implement a service, also referred to herein as a service instance. A service refers to a module supporting functions provided by a computer, and may be an application program, a function, or the like.
Hash slot: hash slots are adopted in a Redis Cluster, and one Redis Cluster contains 16384 hash slots (numbered 0-16383). All keys stored in the Redis Cluster are mapped into these hash slots, and each key in the cluster belongs to one of the 16384 hash slots. The cluster is partitioned according to the slots, and the data volume and number of requests that different nodes are responsible for can be controlled by assigning a different number of slots to each node of the cluster.
Region: areas can be divided according to differences in geographic position. For example, different geographic locations such as Beijing and Shanghai are different regions. An area is not limited to a specific size; areas suitable for the application can be divided according to the specific application conditions of the project.
A Zone: a zone can be understood as a specific processing end within a region, such as a machine room. For example, if the region is Beijing and Beijing has two machine rooms, the two machine rooms, zone1 and zone2, may be divided under the region Beijing.
Machine room: in the IT industry, a machine room generally refers to a place where servers are housed to provide IT services for users and employees, such as telecommunications, internet, mobile, dual-line, power, and enterprise machine rooms.
Load balancing: load balancing is a key component of a highly available network infrastructure and is typically used to distribute workload across multiple work processing modules to improve the performance and reliability of a website, application, database, or other service. A work processing module may be, for example, a server.
OpenResty: OpenResty is a high-performance Web platform based on Nginx (a high-performance HTTP and reverse-proxy Web server that also provides IMAP/POP3/SMTP services) and Lua (a small scripting language written in standard C that can be compiled and run on almost all operating systems and platforms). A large number of well-crafted Lua libraries, third-party modules and most of their dependencies are integrated into the platform.
The game gateway: an API gateway with a cloud native architecture, obtained by secondary development based on the open source Apache APISIX. The cloud native architecture is a set of architecture principles and design patterns based on cloud native technology, aiming to strip the non-business code out of cloud applications to the greatest extent, so that the cloud infrastructure takes over a large number of the application's original non-functional characteristics (such as elasticity, resilience, security, observability, gray release, and the like), freeing services from non-functional interruption troubles and making them lightweight, agile and highly automated. Apache APISIX is a dynamic, real-time, high-performance API gateway, and its language and development platform is OpenResty.
Uniform Resource Locator (URL): is a representation method for specifying the location of information on a web service program of the internet.
In the present application, a user request processing method is provided, and the present application relates to a user request processing apparatus, a computing device, and a computer-readable storage medium, which are described in detail in the following embodiments one by one.
Fig. 1 is a flowchart illustrating a user request processing method according to an embodiment of the present application, which specifically includes the following steps:
s102, receiving a user request of a target service, wherein the user request comprises a request identifier corresponding to a user;
in a specific application, the user request of the target service refers to a request sent by a user for the target service. The user request may be for requesting a service provided by the target service, updating the target service, etc., as is reasonable. The specific target service may be selected according to application requirements, which is not limited in this embodiment, for example, the target service may be a game service, a commodity transaction service, a video service, and the like. The request identification corresponding to the user is used for ensuring that the user request of the same user is processed by the same target processing terminal. Therefore, the request identifier corresponding to the user may specifically include: user ID, game ID in a game scene, merchandise category ID in a shopping mall scene, and the like. Also, the request identification may be obtained from the user request before performing step S104. The request types of the user requests are different, and the user requests can be obtained in different modes. Alternative embodiments are described in detail below to facilitate understanding and reasonable layout.
And S104, based on the request identifier, determining a target processing end identifier allocated to the request identifier according to a rule for uniformly allocating a preset number of processing end identifiers, wherein the preset number of processing end identifiers are uniformly allocated to each processing end.
In a specific application, the processing end is used to support the implementation of the target service, and may be a machine room, a server, and the like. The processing end identifier may be an area identifier in a scenario where machine rooms or servers are unitized, or an identifier of the processing end itself. There may be various rules for uniformly allocating the preset number of processing end identifiers. For example, a rule that directly allocates the preset number of processing end identifiers uniformly may be used: for example, 10 processing end identifiers are sequentially allocated to different request identifiers, and in the next round of allocation after one round is completed, request identifiers are again allocated sequentially starting from the first processing end identifier, and so on. Or, for example, a rule that obtains a hash slot from the request identifier and uniformly distributes the hash slots to the processing ends may be used. For ease of understanding, the second example is specifically described below in the form of an alternative embodiment.
In an optional implementation manner, the determining, based on the request identifier and according to a rule of uniformly allocating a preset number of processing end identifiers, a target processing end identifier allocated to the request identifier may specifically include the following steps:
inputting the request identification into a hash slot algorithm to obtain a target hash slot;
and searching a target processing end identifier corresponding to the target hash slot from the pre-established corresponding relation between the hash slot and the processing end identifier.
In a specific application, each processing end in the present application may belong to a processing end cluster, for example, a Redis cluster. 16384 hash slots are built into the processing end cluster. When a key-value pair needs to be placed in the Redis cluster, the Redis cluster calculates a hash value for the key by using the CRC16 algorithm (a hash algorithm) and takes that hash value modulo 16384. Thus, each key corresponds to a hash slot numbered between 0 and 16383, and Redis maps the hash slots to the different nodes so that each node in the Redis cluster is mapped to an approximately equal number of slots. A benefit of using hash slots is that nodes can be added or removed easily. When a node needs to be added, only some hash slots need to be moved from the existing nodes in the Redis cluster to the new node; when a node needs to be removed, only the hash slots on the removed node need to be moved to the remaining nodes in the Redis cluster. That is, the slot space of the Redis Cluster can be distributed in a user-defined manner, so the size and position of each partition can be customized.
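Illustratively, the hash slot calculation described above can be sketched as follows in Lua (a minimal sketch for illustration only, not the actual gateway code; Lua 5.3 bitwise operators are used, whereas an OpenResty/LuaJIT gateway would typically rely on a bit library or a C implementation, and the function names are assumptions):
    -- Illustrative sketch: map a request identifier to one of 16384 hash
    -- slots using CRC16 (XMODEM polynomial, as used by Redis Cluster).
    local function crc16(key)
      local crc = 0
      for i = 1, #key do
        crc = crc ~ (string.byte(key, i) << 8)
        for _ = 1, 8 do
          if crc & 0x8000 ~= 0 then
            crc = ((crc << 1) ~ 0x1021) & 0xFFFF
          else
            crc = (crc << 1) & 0xFFFF
          end
        end
      end
      return crc
    end

    local function target_hash_slot(request_id)
      return crc16(tostring(request_id)) % 16384   -- slot number in 0..16383
    end

    print(target_hash_slot("123"))   -- prints the slot for user ID "123"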
In addition, in a scenario of unitizing the processing end or the cluster to which the processing end belongs, the correspondence between the pre-established hash slot and the processing end identifier may be a unit ID correspondence rule, which refers to a correspondence between a unit ID and the region and zone. For example:
(The original publication shows here, as an image, a configuration example of the unit ID correspondence rule, mapping a unit ID to a region ID such as region1 and a processing end ID such as zone1.)
Here, region1 represents the region ID, and zone1 represents the processing end ID. In a specific application, units may be divided according to application requirements; for example, one unit identifier may correspond to multiple processing ends, and the above is only an exemplary illustration. On this basis, initialization is performed according to a preset unitization rule: the 16384 hash slots are evenly distributed to the preconfigured number of unit IDs. For example, with 3 unit IDs, 5462 hash slots can be assigned to unit ID1, and 5461 hash slots each to unit ID2 and unit ID3. In this way, the correspondence between hash slots and processing end identifiers is established. For example, if the request identifier is the user ID, the corresponding slot position, that is, the target hash slot, can be calculated by the hash slot algorithm, and the corresponding unit ID is then looked up according to the slot position. When the unit ID is equivalent to the processing end identifier, finding the unit ID means finding the target processing end identifier; when there is a correspondence between the unit ID and the processing end identifier, the target processing end identifier corresponding to the unit ID can be looked up from a pre-established correspondence between unit identifiers and processing end identifiers.
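Illustratively, the unit ID correspondence rule and the even distribution of hash slots to unit IDs might be sketched as follows (the field names unit_id, region and zone and the concrete values are assumptions made for illustration, not the actual configuration):
    -- Illustrative sketch of a unit ID correspondence rule and an even split
    -- of the 16384 hash slots across the configured unit IDs.
    local unit_rules = {
      { unit_id = "unit1", region = "region1", zone = "zone1" },
      { unit_id = "unit2", region = "region1", zone = "zone2" },
      { unit_id = "unit3", region = "region2", zone = "zone3" },
    }

    -- With 3 unit IDs this yields 5462 + 5461 + 5461 slots, matching the
    -- example above.
    local function build_slot_table(rules)
      local total, n = 16384, #rules
      local slot_to_unit, slot = {}, 0
      for i, rule in ipairs(rules) do
        local count = total // n + (i <= total % n and 1 or 0)
        for _ = 1, count do
          slot_to_unit[slot] = rule.unit_id
          slot = slot + 1
        end
      end
      return slot_to_unit
    end

    local slot_to_unit = build_slot_table(unit_rules)
    -- slot_to_unit[target_hash_slot(user_id)] then gives the target unit ID.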
S106, searching a target processing terminal with a processing terminal identification, and calling an instance in the target processing terminal to process the user request.
In a specific application, a specific process of searching for a target processing end with a processing end identifier can be divided into a plurality of processes according to whether a disaster tolerance rule exists or not. Alternative embodiments are described in detail below to facilitate understanding and reasonable layout. And, the manner of processing the user request by the instance in the call target processing terminal can be various. This is explained in more detail below in the form of alternative embodiments.
In an optional implementation manner, the processing, by the instance in the invocation target processing end, the user request may specifically include the following steps:
acquiring a plurality of registered example information, and searching target example information corresponding to the processing terminal identification from the plurality of example information;
invoking the instance with the target instance information handles the user request.
In this embodiment, at least one piece of target instance information corresponding to the processing end identifier may be included. In addition, the method can be particularly applied to the condition that the running performance of the service instance with the target instance information reaches the preset index, so that any instance with the target instance information can be directly called to process the user request without judging whether the instance reaches the preset load balancing condition.
In another optional implementation manner, the processing the user request by the instance in the call target processing end may specifically include the following steps:
acquiring a plurality of registered example information, and searching target example information corresponding to the processing terminal identification from the plurality of example information;
and calling the instance which reaches the preset load balancing condition from a plurality of instances with target instance information at the target processing end to process the user request.
In a specific application, the user request may be processed by a gateway. Therefore, the manner of acquiring the registered instance information may include: the gateway uses the Discovery function provided by the target service to perform service discovery and acquire the service instance information stored in the service registration stage, that is, the instance information. Illustratively, the service instance information may include: host information, port information, instance weight, instance status, region information of the area to which the processing end belongs, and processing end (zone) information. Illustratively, the game gateway can use Discovery of the target service to discover the service instance information stored in the service registration stage; select, from all available service instances according to the service instance information, the candidate service instances deployed in the target zone, where the target zone is the target processing end; and, when there are multiple candidate service instances, the target machine room corresponding to the target zone can determine the target service instance according to the load balancing strategy. In this way, the gateway can invoke the instance that meets the preset load balancing condition, i.e., the target service instance, to process the user request. The preset load balancing condition may be set according to application requirements, which is not limited in this embodiment. For example, the preset load balancing condition may include: the instance with the fewest pending user requests assigned to it, the instance that completes processing of its current user requests most efficiently, an instance that is not currently processing any user request, and the like, all of which are reasonable.
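Illustratively, screening the registered instance information by the target zone and applying the "fewest pending user requests" load balancing condition might be sketched as follows (the instance-info field names and values are assumptions for illustration):
    -- Illustrative sketch: among the registered instance information, keep
    -- the instances deployed in the target zone and pick the one with the
    -- fewest pending user requests.
    local function pick_instance(instances, target_zone)
      local best
      for _, inst in ipairs(instances) do
        if inst.zone == target_zone then
          if best == nil or inst.pending < best.pending then
            best = inst
          end
        end
      end
      return best   -- nil when no instance is deployed in the target zone
    end

    local instances = {
      { host = "10.0.0.1", port = 8080, zone = "zone1", pending = 3 },
      { host = "10.0.0.2", port = 8080, zone = "zone1", pending = 1 },
      { host = "10.0.1.1", port = 8080, zone = "zone2", pending = 0 },
    }
    print(pick_instance(instances, "zone1").host)   -- 10.0.0.2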
According to the embodiment of the application, the preset number of processing end identifiers are uniformly distributed to the processing ends, and the processing ends can be guaranteed to uniformly bear user requests. And according to the rule of uniformly distributing the processing end identifiers with the preset number, determining the target processing end identifiers distributed to the request identifiers, further searching the target processing end with the processing end identifiers, calling the instance in the target processing end to process the user request, and ensuring that the user request is uniformly distributed to different processing ends. And the request identification corresponds to the user, and the target processing terminal identification is distributed based on the request identification. Therefore, the user requests of the same user can be processed by the same target processing terminal, the problem that data synchronization needs to be additionally carried out due to the fact that different processing terminals process the user requests of the same user is solved, and the stability of the processing terminals is improved so that the balance of user request processing is guaranteed. Therefore, the scheme can improve the balance of user request processing.
In an optional implementation manner, before determining the target processing-side identifier allocated to the request identifier, the user request processing method provided in the embodiment of the present application may further include the following steps:
based on the user request, obtaining a routing rule matching condition, and determining a target routing rule reaching the routing rule matching condition from a plurality of preset routing rules;
analyzing the target routing rule, and acquiring a target extraction mode of the request identifier based on an analysis result;
and extracting the request identification from the user request by using a target extraction mode.
In a specific application, user requests are typically handled by a gateway, and a gateway often handles a large number of different user requests, for which different routing rules can be configured. Therefore, in order to improve accuracy, the gateway may obtain the routing rule matching condition based on the user request and determine, from a plurality of preset routing rules, the target routing rule that reaches the routing rule matching condition. The routing rule matching condition may include that routing information in the user request, such as the request domain name (host) and the URL path regular expression, conforms to the corresponding preset routing information. Further, based on the parsing result, there may be various target extraction manners for obtaining the request identifier, which are specifically described below in the form of alternative embodiments.
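Illustratively, matching a user request against the preset routing rules by request domain name and URL path might be sketched as follows (Lua patterns stand in for the URL path regular expressions, and the rule fields and values are assumptions for illustration):
    -- Illustrative sketch: find the target routing rule whose host and path
    -- pattern match the user request.
    local routes = {
      { host = "game.example.com", path_pattern = "^/api/v1/callback/", service_type = "callback" },
      { host = "game.example.com", path_pattern = "^/api/v1/",          service_type = "normal"   },
    }

    local function match_route(request_host, request_path)
      for _, rule in ipairs(routes) do
        if request_host == rule.host and string.find(request_path, rule.path_pattern) then
          return rule   -- the target routing rule
        end
      end
      return nil        -- no rule reaches the matching condition
    end

    local rule = match_route("game.example.com", "/api/v1/user/profile")
    -- rule.service_type == "normal" for this request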
In an alternative embodiment, the parsing result includes: the corresponding relation between the request type and the extraction mode;
based on the analysis result, the obtaining of the target extraction manner of the request identifier may specifically include the following steps:
identifying a request type of a user request;
and searching the extraction mode corresponding to the request type requested by the user from the corresponding relation between the request type and the extraction mode to obtain the target extraction mode.
The request type of a user request may be divided according to differences in how the user requests the service, differences in the service identifier carried in the user request, and the like. For example, the ways of requesting the service may include: the user requests the service through a third-party service callback interface, or the user requests the service through a custom interface of the service itself. Therefore, the request type, i.e. the service type service_type, may include 2 types: a callback type and a normal type. In this way, different extraction modes of the request identifier are executed for different types of user requests, which can improve the accuracy of obtaining the request identifier and is suitable for scenarios with various service request modes.
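Illustratively, the correspondence between the request type and the extraction mode might be sketched as a simple dispatch table (the parameter names are placeholders and assumptions; more concrete callback and normal extraction are sketched further below):
    -- Illustrative sketch: dispatch from the request type (service_type) to
    -- an extraction mode of the request identifier.
    local extraction_modes = {
      callback = function(args) return args["uid"] end,                 -- placeholder
      normal   = function(args) return args["uid"] or args["mid"] end,  -- placeholder
    }

    local function extract_request_id(service_type, args)
      local mode = extraction_modes[service_type]
      return mode and mode(args) or nil
    end

    print(extract_request_id("normal", { mid = "456" }))   -- 456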
In another alternative embodiment, the parsing result includes: requesting the extraction mode of the identifier;
accordingly, the above target extraction method for obtaining the request identifier based on the analysis result may specifically include the following steps:
and extracting the target extraction mode of the request identification from the analysis result.
The embodiment is suitable for extracting the same request identifier from all the user requests. Thus, the identification of the request type is not required, and the efficiency can be improved.
In an optional implementation manner, the extracting, by using the target extraction method, the request identifier from the user request may specifically include the following steps:
if the callback interface does not have the extended parameters, extracting a first parameter value of a first specified parameter name in the user request to obtain a request identifier;
and if the callback interface has extended parameters, extracting a second parameter value of a second specified parameter name in the user request, and extracting, from the parameter value of the spliced parameter containing the second parameter value, the parameter value other than the second parameter value, so as to obtain the request identifier.
In a specific application, this embodiment may be applied to a scenario in which the request type is an external system callback (callback) type, or a scenario in which the request type does not need to be identified. Each specified parameter name is used to indicate the parameter whose value carries the request identifier, such as a user ID; the difference lies in the scenario that each specified parameter name corresponds to. Illustratively, the first specified parameter name may be callback_parameter_key, and the second specified parameter name may be callback_parameter_user_id_key. The request mode depends on the callback protocol (body_protocol), such as submission in get mode (query_string), a form in post mode (post_form), and post_xml (post soap).
Illustratively, the field callback_parameter_key indicates that no extended parameter exists for the third-party service callback interface, i.e., the external system callback interface. When the service_type is callback, the gateway obtains the value of the specified parameter name, i.e. the user ID value, from the request parameters according to the value specified by callback_parameter_key. The field callback_parameter_user_id_key indicates that the service splices the user ID into a parameter other than the user ID parameter and passes it back to the service itself. In that case, according to the value specified by callback_parameter_key, the gateway obtains the value of the specified parameter name from the request parameters, and then parses the spliced parameter value according to the parameter configured by callback_parameter_user_id_key, so as to obtain the user ID. Illustratively, if the request parameters in the user request are pass_parameters=value1&uid=123, the gateway can obtain the user ID value 123 through callback_parameter_key and callback_parameter_user_id_key. The rule for obtaining the request identifier corresponding to the above parameters can be configured in advance, and after the gateway parses out the request identifier according to the configured rule, the request identifier can be stored in the gateway context for subsequent use by the gateway.
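Illustratively, extracting the user ID from a callback request submitted in get mode (query_string) might be sketched as follows (the query-string parsing and the simplified handling of the spliced parameter are assumptions made for illustration, not the actual gateway logic):
    -- Illustrative sketch: extract the user ID from callback request
    -- parameters, following the callback_parameter_key /
    -- callback_parameter_user_id_key configuration described above.
    local function parse_query(qs)
      local args = {}
      for k, v in string.gmatch(qs, "([^&=]+)=([^&=]*)") do
        args[k] = v
      end
      return args
    end

    local function extract_callback_user_id(qs, callback_parameter_key, callback_parameter_user_id_key)
      local args = parse_query(qs)
      if callback_parameter_user_id_key ~= nil then
        -- simplified: the text describes parsing the spliced parameter value
        -- according to callback_parameter_user_id_key; here the user-ID
        -- parameter is read directly from the parsed arguments
        return args[callback_parameter_user_id_key]
      end
      -- no extended parameters: the parameter named by callback_parameter_key
      -- directly carries the user ID
      return args[callback_parameter_key]
    end

    -- For the request parameters "pass_parameters=value1&uid=123":
    print(extract_callback_user_id("pass_parameters=value1&uid=123", "pass_parameters", "uid"))   -- 123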
In an optional implementation manner, the extracting, by using a target extraction manner, a request identifier from a user request includes:
extracting a parameter value of a third specified parameter name in the user request to obtain a request identifier;
and if the extraction fails, extracting the parameter value of the fourth specified parameter name in the user request to obtain a request identifier.
This embodiment can be applied to a scenario in which the request type is an internal service invocation (normal) type, or a scenario in which identification of the request type is not required. Illustratively, the parameter value of the third specified parameter name parameter_user_id_key represents the request identifier, e.g. the user identifier userId. The parameter value of the fourth specified parameter name parameter_user_id_second_key represents an alternative parameter for the user identifier userId, and may be considered a mid parameter. That is, when the gateway acquires the user ID value, it preferentially acquires it from the uid parameter, that is, the third specified parameter, and if the extraction fails, for example, because the uid parameter does not exist, it acquires the user identifier from the mid parameter.
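Illustratively, the uid-first, mid-fallback extraction for the normal type might be sketched as follows (the concrete key values are assumptions for illustration):
    -- Illustrative sketch: take the user ID from the uid parameter first and
    -- fall back to the mid parameter if extraction fails.
    local parameter_user_id_key        = "uid"
    local parameter_user_id_second_key = "mid"

    local function extract_normal_user_id(args)
      local user_id = args[parameter_user_id_key]
      if user_id == nil or user_id == "" then
        user_id = args[parameter_user_id_second_key]   -- fallback to mid
      end
      return user_id
    end

    print(extract_normal_user_id({ uid = "123", mid = "789" }))   -- 123
    print(extract_normal_user_id({ mid = "789" }))                -- 789 (fallback)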
In an optional implementation manner, the user request processing method provided in the embodiment of the present application may further include the following steps:
based on the user request, obtaining a routing rule matching condition, and determining a target routing rule reaching the routing rule matching condition from a plurality of preset routing rules;
analyzing the target routing rule to obtain disaster tolerance information corresponding to the request identification;
invoking an instance in the target processing terminal to process the user request may specifically include the following steps:
if the state information of the target processing terminal accords with the preset unavailable index, determining whether an available processing terminal exists according to the disaster tolerance information;
if the request exists, calling the instance in the available processing end to process the user request.
In a specific application, the preset unavailability index may include: the status information is "unavailable", the target processing terminal does not return response information of heartbeat detection, and the like. And, the manner of processing the user request by calling the instance in the available processing end is similar to the manner of processing the user request by calling the instance in the target processing end, and the difference is that the processing ends are different. For the same content, see the description of the example in the call target processing end in the foregoing embodiment for processing the user request, which is not described herein again. Thus, the success rate of processing the user request can be improved.
In an optional embodiment, the disaster recovery information includes: the first information is used for representing whether cross-processing end access is allowed or not, and the second information is used for representing whether cross-regional access is allowed or not;
determining whether an available processing end exists according to the disaster tolerance information may specifically include the following steps:
if the first information is that cross-processing-end access is allowed, acquiring an area identifier corresponding to the processing-end identifier, and determining whether the area identifier corresponds to a plurality of processing-end identifiers;
if so, determining whether an available processing end exists in the processing ends corresponding to the plurality of processing end identifications;
and if the available processing end does not exist and the second information is allowed to access across the region, determining whether the available processing end exists in the region with the identifier different from the region identifier.
In a specific application, the first information enable_unit_cross_zone represents whether cross-processing-end (zone) access is supported. For example, if the value of the first information is false, it represents that cross-zone access is not supported. The second information enable_unit_cross_region represents whether cross-region access is supported. For example, if the value of the second information is false, it represents that cross-region access is not supported. Also, whether the request traffic, i.e. the user request, is allowed to access a service instance across regions, and whether it is allowed to access a service instance across zones, may be configured in the routing rules in advance. The gateway parses the routing rule to obtain the disaster tolerance information, and the disaster tolerance information can be stored in the gateway context for subsequent use by the gateway. Accessing a service instance refers to invoking the service instance to respond to the user request.
In addition, the principle for setting whether to access across regions and across zones can be determined according to specific service conditions, independent of the cross-domain access capability of the specific application scenario. Illustratively, the target service uses a two-site three-center deployment, and machine room 1: { region: sh, zone: sh001 }, machine room 2: { region: sh, zone: sh002 }, and machine room 3: { region: bj, zone: bj001 } can be obtained through service discovery. If a unitized request for a certain user ID falls on machine room 1, and the data of a specified URL is stored only in machine room 1, then user requests containing that URL are not allowed to access across regions or zones, because the data the user requests cannot be obtained in any machine room other than machine room 1. For a scenario where the data exists in every machine room, when machine room 1 fails, user requests containing the specified URL may be configured to allow cross-region and cross-zone access. Thus, cross-region and cross-zone access can be enabled for such URLs when the routing rules are configured. The advantage is that although unitization means a user normally accesses only a certain machine room, when that machine room fails, some data is also stored in other machine rooms, and such interfaces should remain accessible for high availability of the service.
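Illustratively, the cross-zone and cross-region fallback decision might be sketched as follows (the zone table mirrors the two-site three-center example above; its layout and the function names are assumptions for illustration):
    -- Illustrative sketch: choose an available processing end when the target
    -- zone fails the availability check, honoring the enable_unit_cross_zone
    -- and enable_unit_cross_region flags parsed from the routing rule.
    local zones = {
      { zone = "sh001", region = "sh", available = false },
      { zone = "sh002", region = "sh", available = true  },
      { zone = "bj001", region = "bj", available = true  },
    }

    local function pick_available_zone(target_zone, target_region,
                                       enable_unit_cross_zone, enable_unit_cross_region)
      for _, z in ipairs(zones) do
        if z.zone == target_zone and z.available then
          return z                       -- target processing end is available
        end
      end
      if enable_unit_cross_zone then
        for _, z in ipairs(zones) do
          if z.region == target_region and z.zone ~= target_zone and z.available then
            return z                     -- another zone in the same region
          end
        end
      end
      if enable_unit_cross_region then
        for _, z in ipairs(zones) do
          if z.region ~= target_region and z.available then
            return z                     -- a zone in another region
          end
        end
      end
      return nil                         -- no available processing end; a 502 is returned upstream
    end

    print(pick_available_zone("sh001", "sh", true, true).zone)   -- sh002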
Fig. 2 is a diagram illustrating an example of a structure of a user request processing system according to an embodiment of the present application, which specifically includes:
the gateway is responsible for carrying out flow distribution and acquiring a routing rule; configuring a routing rule and a forwarding rule by a configuration center Etcd; service registration Discovery; and the Caster platform issues the service and registers the service. Among them, the Caster platform corresponds to a publishing platform of an application. In a specific application, gateways may exist in different rooms in different areas, for example, a gateway may exist in both the shanghai room 1 and the shanghai room 3. The gateways in different areas can forward the user request in the area to the application transaction system in the corresponding machine room for processing. In addition, the computer room can also contain a processing-end database. The application transaction system is only an example, and a system of a corresponding target service can be deployed in a machine room according to specific scene requirements.
On the basis of the user request processing system shown in fig. 2, fig. 3 shows an exemplary use case diagram of a user request processing system provided by an embodiment of the present application, and the use case may include the following steps: the configuration center Etcd configures a unitization rule in the routing rule, where the rule includes how to acquire the user ID value and whether cross-region and cross-zone requests are allowed; and configures the correspondence among the area identifier, the machine room identifier, and the unit ID to which each machine room belongs. The gateway obtains the unitization-related configuration, runs the hash slot algorithm on the user ID to calculate the unit ID for unitized routing, screens out service instances according to the unitization configuration rule, and calls a service instance meeting the load balancing condition to process the user request.
The following description further describes the user request processing method with reference to fig. 4 by taking an application of the user request processing method provided by the present application in a user request processing system as an example. Fig. 4 shows an exemplary processing flow diagram of a user request processing method applied to a user request processing system according to an embodiment of the present application, which specifically includes the following steps:
the method comprises the following steps: configuring a unitization rule: under the condition that the target service is accessed to the gateway, a unitization rule can be configured, wherein the unitization rule comprises whether unitization is performed or not, how to acquire a user ID value, whether cross-region access is allowed or not, whether cross-machine-room access is allowed or not, the corresponding relation between the target service and each machine room region identifier, the machine room identifier and the unit ID, and the like;
step two: matching the route: determining a target routing rule, analyzing routing configuration: analyzing the unitization rule configuration, and storing the analyzed parameter values of unitization, user ID value, cross-machine room access permission and cross-regional access permission in the context of the gateway;
step three: acquiring the context of a gateway;
step four: whether unitization: whether to unitize can be judged according to the context of the gateway.
Step five: if unitization, acquiring a user ID, calculating a unit ID: calculating a unit ID through a Hash slot algorithm according to a user ID stored in the context of the gateway;
step six: acquiring a machine room identifier and an area identifier corresponding to the unit ID: matching an area identifier and a machine room identifier corresponding to the service node through the unit ID;
step seven: inquiring whether the current machine room is available, and if so, executing a step twelve; if the current machine room is unavailable, judging whether cross-machine-room access is allowed; the current machine room is a matched machine room obtained through the context of the gateway;
step eight: if cross-machine-room access is not allowed, a 502 exception is returned directly; if cross-machine-room access is allowed, judging whether a machine room other than the current machine room exists in the same area and whether that machine room is available;
step nine: if the same area has a machine room other than the current machine room and that machine room is available, executing step twelve; if the same area has no machine room other than the current machine room, or that machine room is unavailable, continuing to judge whether cross-region access is allowed; wherein the same area, namely the current area, is the area to which the current machine room belongs;
step ten: if cross-region access is not allowed, a 502 exception is returned directly; if cross-region access is allowed, judging whether a machine room exists in a region other than the current region and whether that machine room is available;
step eleven: if a machine room in a region other than the current region is available, executing step twelve; if no machine room is available in any region other than the current region, returning a 502 exception directly;
step twelve: executing a load balancing strategy; therefore, the user request can be sent to the instance meeting the load balancing strategy in the target computer room, namely the instance is called to process the user request.
In a scenario where unitization is not needed, the flow can proceed directly to screening valid service nodes and then execute step twelve above. Screening valid service nodes means determining the registered instance information from the processing ends. The load balancing policy of this embodiment is the load balancing condition in the embodiment of fig. 1. The steps of this embodiment are similar to the steps in the embodiment of fig. 1 and its alternative embodiments, except that this embodiment adopts a different form of description for ease of understanding. For the same parts, reference may be made to the description of the embodiment of fig. 1 and its alternative embodiments, which are not repeated here.
In a specific application, in order to improve the efficiency of obtaining service node information, the following step may also be included before step three: service discovery: acquire service node information by using service discovery. The service node information may include an area identifier and a machine room identifier, the obtained service node information may be stored in the context of the gateway, and a service node may specifically be a processing end and/or an instance in the processing end. Acquiring service node information by using service discovery may specifically include: acquiring service node information according to the service tree; checking whether a corresponding cache exists; if it exists, returning the service node information in the cache; if not, performing service discovery to acquire the service node information and then caching it; and returning the service node information. Returning service node information refers to returning the service node information to the gateway. In this way, the gateway context can be successfully acquired when step three is executed, and the cached service node information reduces the time needed to obtain service node information the next time a user request from the same user is processed, thereby improving efficiency. The service tree refers to a tree-structured record storing service node information.
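Illustratively, the cache-first lookup of service node information might be sketched as follows (discover() is a stand-in for the Discovery call of the target service; its return value and the cache layout are assumptions for illustration):
    -- Illustrative sketch: return service node information from a local
    -- cache when present, otherwise run service discovery and cache the
    -- result for subsequent requests.
    local node_cache = {}

    local function discover(service_name)
      -- stand-in: would query the service registry for registered node info
      -- (area identifier, machine room identifier, instances, ...)
      return { { region = "sh", zone = "sh001" }, { region = "bj", zone = "bj001" } }
    end

    local function get_service_nodes(service_name)
      local cached = node_cache[service_name]
      if cached ~= nil then
        return cached                      -- cache hit: skip discovery
      end
      local nodes = discover(service_name)
      node_cache[service_name] = nodes     -- cache for the next request
      return nodes
    end

    print(#get_service_nodes("app-transaction-system"))   -- 2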
Unitization divides the service into small service units; the functions of each unit are exactly the same, but each unit can process only a part of the data, the data of all units combined forms the complete data, and each unit can internally process a complete service flow. Illustratively, the information enable_unit_default_deployment may also be configured, indicating whether the processing of the request traffic, i.e. the user request, is unitized; it is typically true, i.e. unitized. When the gateway parses the routing rule, the parsing result may be stored in the gateway context. Whether the request traffic is unitized may be configured in the service route in advance, i.e., in the routing rule. Therefore, the unitization setting principle is independent of the specific traffic accessing the gateway and depends only on the attributes of the routing rule matched by the gateway. For example, according to historical experience, some gateways handle more traffic: the number of user requests is greater than or equal to a preset number threshold, the receiving frequency of user requests is greater than or equal to a preset frequency threshold, and so on; the routing rules matched to such a gateway are therefore set to be unitized, and otherwise set to be non-unitized.
Moreover, based on OpenResty, the gateway configuration service interface can be developed in Lua to configure the parameter name of the user ID and the algorithm policy for the specific parameter value, that is, how the request identifier is obtained. Also based on OpenResty, Lua is used to run the hash slot algorithm on the user ID value, determine which machine room the traffic request is routed to, and execute the screening rule of the load balancing algorithm so that a service instance in the corresponding machine room processes the traffic, that is, the user request. The game gateway can be obtained by secondary development based on the open source Apache APISIX. The language and development platform may be OpenResty. OpenResty is a high-performance Web platform based on Nginx and Lua, and can therefore be developed in the Lua language.
Corresponding to the above method embodiment, the present application further provides an embodiment of a user request processing apparatus, and fig. 5 shows a schematic structural diagram of a user request processing apparatus provided in an embodiment of the present application. As shown in fig. 5, the apparatus includes:
a receiving module 502 configured to receive a user request of a target service, wherein the user request includes a request identifier corresponding to a user;
an allocating module 504, configured to determine, based on the request identifier, a target processing end identifier allocated to the request identifier according to a rule for uniformly allocating a preset number of processing end identifiers, where the preset number of processing end identifiers are uniformly allocated to each processing end;
and the processing module 506 is configured to search for a target processing end with the processing end identifier, and invoke an instance in the target processing end to process the user request.
According to the embodiment of the application, the preset number of processing end identifiers are uniformly distributed to the processing ends, and the processing ends can be guaranteed to uniformly bear user requests. And according to the rule of uniformly distributing the processing end identifiers with the preset number, determining the target processing end identifiers distributed to the request identifiers, further searching the target processing end with the processing end identifiers, calling the instance in the target processing end to process the user request, and ensuring that the user request is uniformly distributed to different processing ends. And the request identification corresponds to the user, and the target processing terminal identification is distributed based on the request identification. Therefore, the user requests of the same user can be processed by the same target processing terminal, the problem that data synchronization needs to be additionally carried out due to the fact that different processing terminals process the user requests of the same user is solved, and the stability of the processing terminals is improved so that the balance of user request processing is guaranteed. Therefore, the scheme can improve the balance of user request processing.
In an optional implementation, the allocating module 504 is further configured to:
inputting the request identification into a hash slot algorithm to obtain a target hash slot;
and searching the target processing end identification corresponding to the target hash groove from the pre-established corresponding relation between the hash groove and the processing end identification.
In an optional implementation, the allocating module 504 is further configured to:
before the target processing terminal identification distributed to the request identification is determined, acquiring a routing rule matching condition based on the user request, and determining a target routing rule reaching the routing rule matching condition from a plurality of preset routing rules;
analyzing the target routing rule, and acquiring a target extraction mode of the request identifier based on an analysis result;
and extracting the request identification from the user request by using the target extraction mode.
In an optional implementation, the parsing result includes: the corresponding relation between the request type and the extraction mode;
the assignment module 504, further configured to:
identifying a request type of the user request;
and searching the extraction mode corresponding to the request type of the user request from the corresponding relation between the request type and the extraction mode to obtain the target extraction mode.
In an alternative embodiment, the assigning module 504 is further configured to:
if the callback interface does not have the extended parameters, extracting a first parameter value of a first specified parameter name in the user request to obtain the request identifier;
and if the callback interface has extended parameters, extracting a second parameter value of a second specified parameter name in the user request, and extracting, from the parameter value of the spliced parameter containing the second parameter value, the parameter value other than the second parameter value, so as to obtain the request identifier.
In an alternative embodiment, the assigning module 504 is further configured to:
extracting a parameter value of a third appointed parameter name in the user request to obtain the request identifier;
and if the extraction fails, extracting the parameter value of the fourth specified parameter name in the user request to obtain the request identifier.
In an alternative embodiment, the apparatus further comprises: a routing rule parsing module configured to:
based on the user request, obtaining a routing rule matching condition, and determining a target routing rule reaching the routing rule matching condition from a plurality of preset routing rules;
analyzing the target routing rule to obtain disaster tolerance information corresponding to the request identification;
accordingly, the processing module 506 is further configured to:
if the state information of the target processing terminal meets a preset unavailable index, determining whether an available processing terminal exists according to the disaster tolerance information;
if the user request exists, calling an instance in the available processing end to process the user request.
In an optional implementation manner, the disaster recovery information includes: the first information is used for representing whether cross-processing end access is allowed or not, and the second information is used for representing whether cross-regional access is allowed or not;
accordingly, the processing module 506 is further configured to:
if the first information indicates that cross-processing-end access is allowed, acquiring the area identifier corresponding to the processing end identifier, and determining whether the area identifier corresponds to a plurality of processing end identifiers;
if so, determining whether an available processing end exists in the processing ends corresponding to the plurality of processing end identifications;
and if the available processing end does not exist and the second information is allowed to access across the region, determining whether the available processing end exists in the region with the identifier different from the region identifier.
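Extending the previous sketch with the first and second information described above; the region map and the field names allow_cross_end and allow_cross_region are likewise assumptions.

```python
from typing import Optional


# Hypothetical region-aware fallback using the first and second information.
def select_with_regions(target_id: str,
                        healthy: dict,
                        end_region: dict,
                        disaster_info: dict) -> Optional[str]:
    if not disaster_info.get("allow_cross_end", False):    # first information
        return None
    region = end_region[target_id]
    # Prefer another healthy processing end whose identifier maps to the same region.
    same_region = [pid for pid, r in end_region.items()
                   if r == region and pid != target_id and healthy.get(pid)]
    if same_region:
        return same_region[0]
    # None available in this region: cross regions only if the second information allows it.
    if disaster_info.get("allow_cross_region", False):     # second information
        other = [pid for pid, r in end_region.items()
                 if r != region and healthy.get(pid)]
        if other:
            return other[0]
    return None


print(select_with_regions(
    "room-a1", {"room-a1": False, "room-a2": False, "room-b1": True},
    {"room-a1": "east", "room-a2": "east", "room-b1": "south"},
    {"allow_cross_end": True, "allow_cross_region": True}))  # -> room-b1
```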
In an optional implementation, the processing module 506 is further configured to:
acquire registered instance information of a plurality of instances, and search the instance information for the target instance information corresponding to the target processing end identifier;
and from the plurality of instances of the target processing end described by the target instance information, call an instance that satisfies a preset load balancing condition to process the user request; one possible selection is sketched below.
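The instance selection step could look roughly like the following; the registry layout and the least-active-requests rule merely stand in for whatever load balancing condition is actually preset.

```python
from dataclasses import dataclass


@dataclass
class Instance:
    end_id: str           # identifier of the processing end the instance registered under
    address: str
    active_requests: int  # stand-in load metric


def pick_instance(registry: list, target_end_id: str) -> Instance:
    # Search the registered instance information for the target processing end.
    candidates = [inst for inst in registry if inst.end_id == target_end_id]
    if not candidates:
        raise LookupError(f"no instance registered for processing end {target_end_id!r}")
    # Assumed load balancing condition: the instance with the fewest active requests wins.
    return min(candidates, key=lambda inst: inst.active_requests)


registry = [Instance("room-a", "10.0.0.1:8080", 3), Instance("room-a", "10.0.0.2:8080", 1)]
print(pick_instance(registry, "room-a").address)  # -> 10.0.0.2:8080
```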
The above is an exemplary scheme of the user request processing apparatus of this embodiment. It should be noted that the technical solution of the user request processing apparatus and the technical solution of the user request processing method described above belong to the same concept; for details not described in the technical solution of the user request processing apparatus, reference may be made to the description of the technical solution of the user request processing method.
Fig. 6 shows a block diagram of a computing device according to an embodiment of the present application. The components of the computing device 600 include, but are not limited to, a memory 610 and a processor 620. The processor 620 is coupled to the memory 610 via a bus 630, and a database 650 is used to store data.
Computing device 600 also includes an access device 640 that enables computing device 600 to communicate via one or more networks 660. Examples of such networks include the Public Switched Telephone Network (PSTN), a Local Area Network (LAN), a Wide Area Network (WAN), a Personal Area Network (PAN), or a combination of communication networks such as the Internet. The access device 640 may include one or more of any type of network interface, wired or wireless (e.g., a Network Interface Controller (NIC)), such as an IEEE 802.11 Wireless Local Area Network (WLAN) interface, a Worldwide Interoperability for Microwave Access (WiMAX) interface, an Ethernet interface, a Universal Serial Bus (USB) interface, a cellular network interface, a Bluetooth interface, a Near Field Communication (NFC) interface, and so on.
In one embodiment of the present application, the above-described components of computing device 600, as well as other components not shown in FIG. 6, may also be connected to each other, such as by a bus. It should be understood that the block diagram of the computing device architecture shown in FIG. 6 is for purposes of example only and is not limiting as to the scope of the present application. Those skilled in the art may add or replace other components as desired.
Computing device 600 may be any type of stationary or mobile computing device, including a mobile computer or mobile computing device (e.g., tablet, personal digital assistant, laptop, notebook, netbook, etc.), mobile phone (e.g., smartphone), wearable computing device (e.g., smartwatch, smartglasses, etc.), or other type of mobile device, or a stationary computing device such as a desktop computer or PC. Computing device 600 may also be a mobile or stationary server.
The processor 620, when executing the instructions, implements the steps of the user request processing method described above.
The above is an illustrative scheme of the computing device of this embodiment. It should be noted that the technical solution of the computing device and the technical solution of the user request processing method described above belong to the same concept; for details not described in the technical solution of the computing device, reference may be made to the description of the technical solution of the user request processing method.
An embodiment of the present application also provides a computer-readable storage medium storing computer instructions, which when executed by a processor implement the steps of the user request processing method as described above.
The above is an illustrative scheme of the computer-readable storage medium of this embodiment. It should be noted that the technical solution of the storage medium and the technical solution of the user request processing method belong to the same concept; for details not described in the technical solution of the storage medium, reference may be made to the description of the technical solution of the user request processing method.
The foregoing describes specific embodiments of the present application. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or advantageous.
The computer instructions comprise computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like.
It should be noted that, for simplicity of description, the above method embodiments are described as a series of combinations of acts, but those skilled in the art should understand that the present application is not limited by the described order of acts, as some steps may be performed in other orders or simultaneously according to the present application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments, and that the acts and modules involved are not necessarily required by the present application.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The preferred embodiments of the present application disclosed above are intended only to aid in the explanation of the application. The description of the optional embodiments is not exhaustive and is not intended to limit the application to the precise forms disclosed. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the application and its practical applications, so as to enable others skilled in the art to best understand and utilize the application. The application is limited only by the claims and their full scope and equivalents.

Claims (12)

1. A user request processing method is characterized by comprising the following steps:
receiving a user request of a target service, wherein the user request comprises a request identifier corresponding to a user;
determining, based on the request identifier, a target processing end identifier allocated to the request identifier according to a rule for uniformly allocating a preset number of processing end identifiers, wherein the preset number of processing end identifiers are uniformly allocated to the processing ends;
and searching for the target processing end having the target processing end identifier, and calling an instance in the target processing end to process the user request.
2. The method of claim 1, wherein the determining, based on the request identifier, a target processing end identifier allocated to the request identifier according to a rule for uniformly allocating a preset number of processing end identifiers comprises:
inputting the request identifier into a hash slot algorithm to obtain a target hash slot;
and searching a pre-established correspondence between hash slots and processing end identifiers for the target processing end identifier corresponding to the target hash slot.
3. The method according to claim 1 or 2, wherein before the determining a target processing end identifier allocated to the request identifier, the method further comprises:
acquiring a routing rule matching condition based on the user request, and determining, from a plurality of preset routing rules, a target routing rule that satisfies the routing rule matching condition;
parsing the target routing rule, and obtaining a target extraction mode of the request identifier based on the parsing result;
and extracting the request identifier from the user request by using the target extraction mode.
4. The method of claim 3, wherein the parsing result comprises: a correspondence between request types and extraction modes;
the obtaining a target extraction mode of the request identifier based on the parsing result comprises:
identifying the request type of the user request;
and searching the correspondence between request types and extraction modes for the extraction mode corresponding to the request type of the user request, to obtain the target extraction mode.
5. The method according to claim 3, wherein the extracting the request identifier from the user request by using the target extraction mode comprises:
if the callback interface does not carry extended parameters, extracting a first parameter value of a first specified parameter name in the user request to obtain the request identifier;
and if the callback interface carries extended parameters, extracting a second parameter value of a second specified parameter name in the user request, and extracting, from the value of the spliced parameter containing the second parameter value, the part other than the second parameter value to obtain the request identifier.
6. The method according to claim 3, wherein the extracting the request identifier from the user request by using the target extraction mode comprises:
extracting a parameter value of a third specified parameter name in the user request to obtain the request identifier;
and if the extraction fails, extracting a parameter value of a fourth specified parameter name in the user request to obtain the request identifier.
7. The method of any of claims 1-2, 4-6, further comprising:
acquiring a routing rule matching condition based on the user request, and determining, from a plurality of preset routing rules, a target routing rule that satisfies the routing rule matching condition;
parsing the target routing rule to obtain disaster tolerance information corresponding to the request identifier;
the calling an instance in the target processing end to process the user request comprises:
if the state information of the target processing end meets a preset unavailability index, determining whether an available processing end exists according to the disaster tolerance information;
and if such an available processing end exists, calling an instance in the available processing end to process the user request.
8. The method of claim 7, wherein the disaster tolerance information comprises: first information indicating whether cross-processing-end access is allowed, and second information indicating whether cross-region access is allowed;
the determining whether an available processing end exists according to the disaster tolerance information comprises:
if the first information allows cross-processing-end access, acquiring the region identifier corresponding to the target processing end identifier, and determining whether the region identifier corresponds to a plurality of processing end identifiers;
if so, determining whether an available processing end exists among the processing ends corresponding to the plurality of processing end identifiers;
and if no available processing end exists there and the second information allows cross-region access, determining whether an available processing end exists in a region whose identifier differs from the region identifier.
9. The method of any of claims 1-2, 4-6, and 8, wherein the calling an instance in the target processing end to process the user request comprises:
acquiring registered instance information of a plurality of instances, and searching the instance information for the target instance information corresponding to the target processing end identifier;
and from the plurality of instances of the target processing end described by the target instance information, calling an instance that satisfies a preset load balancing condition to process the user request.
10. A user request processing apparatus, comprising:
the system comprises a receiving module, a sending module and a receiving module, wherein the receiving module is configured to receive a user request of a target service, and the user request comprises a request identifier corresponding to a user;
the distribution module is configured to determine a target processing end identifier distributed to the request identifier according to a rule for uniformly distributing a preset number of processing end identifiers based on the request identifier, wherein the preset number of processing end identifiers are uniformly distributed to each processing end;
and the processing module is configured to search for a target processing terminal with the processing terminal identification and call an instance in the target processing terminal to process the user request.
11. A computing device comprising a memory, a processor, and computer instructions stored on the memory and executable on the processor, wherein the processor implements the steps of the method of any one of claims 1-9 when executing the instructions.
12. A computer-readable storage medium storing computer instructions, which when executed by a processor, perform the steps of the method of any one of claims 1 to 9.
CN202210014150.2A 2022-01-06 2022-01-06 User request processing method and device Pending CN114356575A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210014150.2A CN114356575A (en) 2022-01-06 2022-01-06 User request processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210014150.2A CN114356575A (en) 2022-01-06 2022-01-06 User request processing method and device

Publications (1)

Publication Number Publication Date
CN114356575A true CN114356575A (en) 2022-04-15

Family

ID=81107700

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210014150.2A Pending CN114356575A (en) 2022-01-06 2022-01-06 User request processing method and device

Country Status (1)

Country Link
CN (1) CN114356575A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114844951A (en) * 2022-04-22 2022-08-02 百果园技术(新加坡)有限公司 Request processing method, system, device, storage medium and product
CN114844951B (en) * 2022-04-22 2024-03-19 百果园技术(新加坡)有限公司 Request processing method, system, device, storage medium and product

Similar Documents

Publication Publication Date Title
US11646939B2 (en) Network function NF management method and NF management device
US10630808B1 (en) Contextual routing for directing requests to different versions of an application
CN109428749A (en) Network management and relevant device
CN112995273B (en) Network call-through scheme generation method and device, computer equipment and storage medium
CN111352716B (en) Task request method, device and system based on big data and storage medium
CN107172214B (en) Service node discovery method and device with load balancing function
CN113301079B (en) Data acquisition method, system, computing device and storage medium
CN110557336A (en) Addressing routing method and system
CN111752681A (en) Request processing method, device, server and computer readable storage medium
CN204695386U (en) Towards the management information system of many tenants
CN113630479A (en) Domain name resolution method and related product
US9967232B1 (en) Network traffic management system using customer policy settings
US20110153826A1 (en) Fault tolerant and scalable load distribution of resources
CN114356575A (en) User request processing method and device
US11108854B2 (en) Peer-to-peer network for internet of things resource allocation operation
CN110958180A (en) Gateway routing method, intelligent gateway, electronic device and computer storage medium
CN108347465B (en) Method and device for selecting network data center
US7543300B2 (en) Interface for application components
US10231269B2 (en) Dynamic generation of geographically bound manet IDs
CN115622976A (en) Domain name management system, domain name registration and resolution method, device, equipment and medium
CN115004657B (en) Addressing method, addressing system and addressing device
US10958580B2 (en) System and method of performing load balancing over an overlay network
CN115242791A (en) Service platform access method, device, equipment and storage medium
Usmanova et al. Cloud and fog computing: Challenging issues for internet of things and big data
US20230129604A1 (en) Open Edge Cloud Platform for Location-Sensitive Applications

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination