CN114143269A - HTTP request distribution method, device, equipment and medium - Google Patents

HTTP request distribution method, device, equipment and medium

Info

Publication number
CN114143269A
CN114143269A
Authority
CN
China
Prior art keywords
target
http request
service
strategy
request
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111354001.2A
Other languages
Chinese (zh)
Inventor
徐凯希
江武
杨冬
李良婷
李晶晶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Tuhu Information Technology Co ltd
Original Assignee
Shanghai Tuhu Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Tuhu Information Technology Co ltd filed Critical Shanghai Tuhu Information Technology Co ltd
Priority to CN202111354001.2A
Publication of CN114143269A
Legal status: Pending

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 - Traffic control in data switching networks
    • H04L47/10 - Flow control; Congestion control
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L67/01 - Protocols
    • H04L67/02 - Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer And Data Communications (AREA)

Abstract

The application discloses a method, an apparatus, a device and a medium for splitting HTTP (Hypertext Transfer Protocol) requests. The method includes: acquiring a target HTTP request sent by a source service; parsing the target HTTP request to extract target key information, and combining the target key information into target data in a preset format; matching the target data against preset splitting policies to obtain a target splitting policy matched with the target data; determining, by using the target splitting policy and the target data, a target service corresponding to the target HTTP request; and forwarding the target HTTP request to the target service. Because the splitting policy is matched per request, the scheme is unaffected by differences between services, is generally applicable, and improves the effectiveness and accuracy of the configured splitting policy.

Description

HTTP request distribution method, device, equipment and medium
Technical Field
The present application relates to the field of request splitting technologies, and in particular to an HTTP request splitting method, apparatus, device and medium.
Background
In Internet products, requirements evolve continuously and the technology selection of the corresponding services changes with them. A change of technology stack requires old services to be refactored, and once the new services are developed, the old and new services must be transitioned. Releasing a new service directly to production may fall short of expectations and cause service stability and availability problems. It is therefore necessary to split part of the old service's traffic to the new service and adjust the traffic gradually, based on whether the expected effect is achieved, until the switchover is complete, so that the handover is smooth. To ensure a smooth switch between old and new services that is transparent to the upstream caller, a better approach is to split traffic between the two versions and analyze the effect through testing.
Currently, the common traffic splitting techniques are A/B testing and load balancing. A/B testing is a product optimization method: within the same time window, visitor groups of the same (or similar) composition are randomly routed to different versions of a service, user experience data and business data are collected for each group, and the best version is finally identified and formally adopted. Traffic splitting based on load balancing means that, after receiving a client request, the load balancer distributes traffic to the application servers behind it according to a forwarding policy. The drawback of A/B testing is that a dedicated splitting mechanism must be built separately for each business scenario, which consumes considerable labor, so A/B testing is mostly used for comparative tests of different product features. The drawback of load-balancing-based traffic splitting is that the forwarding policy offers little configurability.
Disclosure of Invention
In view of this, an object of the present application is to provide an HTTP request splitting method, apparatus, device and medium that are unaffected by differences between services, are generally applicable, and improve the effectiveness and accuracy of the configured splitting policy. The specific scheme is as follows:
In a first aspect, the present application discloses an HTTP request splitting method, including:
acquiring a target HTTP request sent by a source service;
parsing the target HTTP request to extract target key information, and combining the target key information into target data in a preset format;
matching the target data with a preset splitting policy to obtain a target splitting policy matched with the target data;
determining a target service corresponding to the target HTTP request by using the target splitting policy and the target data;
and forwarding the target HTTP request to the target service.
Optionally, the obtaining of the target HTTP request sent by the source service includes:
intercepting an HTTP request based on a specified domain name sent by a source service;
judging whether the HTTP request meets a preset screening condition;
and if so, determining the HTTP request as a target HTTP request.
Optionally, the matching the target data with a preset splitting policy to obtain a target splitting policy matched with the target data includes:
determining interface information according to the target data;
and matching the interface information with a preset splitting policy to obtain a target splitting policy matched with the interface information.
Optionally, the matching the interface information with a preset splitting policy to obtain a target splitting policy matched with the interface information includes:
matching the interface information with a preset splitting policy in a local cache to obtain a target splitting policy matched with the interface information;
and if the target splitting policy does not exist in the local cache, matching the interface information with the preset splitting policy in a database to obtain the target splitting policy matched with the interface information.
Optionally, the method further includes:
acquiring a changed splitting policy, modifying the database according to the changed splitting policy, and modifying target information in zookeeper, so that zookeeper triggers a change event and notifies each splitting server to refresh the changed splitting policy from the database into its own cache.
Optionally, the determining, by using the target splitting policy and the target data, the target service corresponding to the target HTTP request includes:
determining the target service corresponding to the target HTTP request according to the splitting rule in the target splitting policy and the target data.
Optionally, after forwarding the target HTTP request to the target service, the method further includes:
obtaining return data of the target service, and returning the return data to the source service;
if the request fails with an exception, returning a preset status code to the source service;
wherein different exception types correspond to different preset status codes.
In a second aspect, the present application discloses an HTTP request splitting apparatus, including:
a target request acquisition module, configured to acquire a target HTTP request sent by a source service;
a target request parsing module, configured to parse the target HTTP request to extract target key information, and combine the target key information into target data in a preset format;
a splitting policy matching module, configured to match the target data with a preset splitting policy to obtain a target splitting policy matched with the target data;
a target service determination module, configured to determine, by using the target splitting policy and the target data, a target service corresponding to the target HTTP request;
and a target request forwarding module, configured to forward the target HTTP request to the target service.
In a third aspect, the present application discloses an electronic device, comprising:
a memory for storing a computer program;
and a processor for executing the computer program to implement the aforementioned HTTP request splitting method.
In a fourth aspect, the present application discloses a computer-readable storage medium for storing a computer program which, when executed by a processor, implements the aforementioned HTTP request splitting method.
Therefore, in the present application, a target HTTP request sent by a source service is first acquired; the target HTTP request is then parsed to extract target key information, which is combined into target data in a preset format; the target data is matched against the preset splitting policies to obtain a target splitting policy matched with the target data; the target splitting policy and the target data are used to determine the target service corresponding to the target HTTP request; and finally the target HTTP request is forwarded to the target service. That is, in the present application, a preset splitting policy is matched according to the target key information in the received target HTTP request, and the request is forwarded to the target service according to that policy. Because the splitting policy is tied to the request itself, the scheme is unaffected by differences between services, is generally applicable, and improves the effectiveness and accuracy of the configured splitting policy.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only embodiments of the present application, and that those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of an HTTP request splitting method provided in the present application;
fig. 2 is a schematic diagram of a specific splitting policy configuration provided in the present application;
fig. 3 is a flowchart of a specific HTTP request splitting method provided in the present application;
fig. 4 is a flowchart of a specific HTTP request splitting method provided in the present application;
fig. 5 is a schematic structural diagram of an HTTP request splitting apparatus provided in the present application;
fig. 6 is a block diagram of an electronic device provided in the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments herein without creative effort fall within the protection scope of the present application.
Currently, the common traffic splitting techniques are A/B testing and load balancing. A/B testing is a product optimization method: within the same time window, visitor groups of the same (or similar) composition are randomly routed to different versions of a service, user experience data and business data are collected for each group, and the best version is finally identified and formally adopted. Traffic splitting based on load balancing means that, after receiving a client request, the load balancer distributes traffic to the application servers behind it according to a forwarding policy. The drawback of A/B testing is that a dedicated splitting mechanism must be built separately for each business scenario, which consumes considerable labor, so A/B testing is mostly used for comparative tests of different product features. The drawback of load-balancing-based traffic splitting is that the forwarding policy offers little configurability. The HTTP request splitting scheme of the present application therefore aims to be unaffected by differences between services, to be generally applicable, and to improve the effectiveness and accuracy of the configured splitting policy.
Referring to fig. 1, an embodiment of the present application discloses an HTTP request splitting method, including:
step S11: and acquiring a target HTTP request sent by the source service.
In a specific embodiment, an HTTP request sent by the source service to a specified domain name may be intercepted; whether the HTTP request meets a preset screening condition is then judged; and if so, the HTTP request is determined to be a target HTTP request.
Wherein the specified domain name is the original domain name of the old service.
In addition, the preset screening condition may be that the target HTTP request is neither a redirection request nor a front-end resource request.
It should be noted that the preset screening condition may be determined according to the actual scenario; for example, if the new service does not handle front-end resource requests, a front-end resource request is forwarded directly to the old service and is not treated as a target HTTP request.
Step S12: parse the target HTTP request to extract target key information, and combine the target key information into target data in a preset format.
The target key information may include the request mode (HTTP method), request parameters, Header information, and the like. The target key information is combined into a data entity in the preset format expected by the splitting policies.
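The following is a minimal sketch, with assumed field names, of such a "target data" entity assembled from the parsed request: interface path, request mode, request parameters and Header information.

```java
import java.util.Collections;
import java.util.Map;
import java.util.stream.Collectors;
import javax.servlet.http.HttpServletRequest;

public class TargetData {

    private String path;                     // interface path, used later to look up the splitting policy
    private String method;                   // request mode, e.g. GET or POST
    private Map<String, String> parameters;  // request parameters
    private Map<String, String> headers;     // Header information

    public static TargetData from(HttpServletRequest request) {
        TargetData data = new TargetData();
        data.path = request.getRequestURI();
        data.method = request.getMethod();
        data.parameters = request.getParameterMap().entrySet().stream()
                .collect(Collectors.toMap(Map.Entry::getKey, e -> e.getValue()[0]));
        data.headers = Collections.list(request.getHeaderNames()).stream()
                .collect(Collectors.toMap(name -> name, request::getHeader));
        return data;
    }

    public String getPath() { return path; }
    public String getMethod() { return method; }
    public Map<String, String> getParameters() { return parameters; }
    public Map<String, String> getHeaders() { return headers; }
}
```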
Step S13: match the target data with the preset splitting policies to obtain a target splitting policy matched with the target data.
In a specific implementation, interface information may be determined according to the target data, and the interface information is then matched with the preset splitting policies to obtain a target splitting policy matched with the interface information.
That is, in the embodiment of the present application, splitting policies may be configured per requested interface, each interface having its own preset splitting policy.
Further, in a specific embodiment, the preset splitting policy corresponding to each interface may include one or more policies, such as one or more of a request mode policy, a parameter whitelist policy, a parameter proportion policy and a traffic proportion policy. Policy priorities may be set, and the matched policies are applied in priority order until a target service is determined (see the sketch below).
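The sketch below illustrates applying the matched policies in priority order until one of them yields a target service. The SplitPolicy interface and TargetService enum are names assumed for illustration; the patent only states that policies are applied by priority.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Objects;

interface SplitPolicy {
    int priority();
    /** Returns the chosen service, or null when this policy does not decide the request. */
    TargetService resolve(TargetData data);
}

enum TargetService { OLD_SERVICE, NEW_SERVICE }

class PolicyChain {
    TargetService resolve(List<SplitPolicy> policies, TargetData data, TargetService defaultService) {
        return policies.stream()
                .sorted(Comparator.comparingInt(SplitPolicy::priority))   // apply policies by priority
                .map(policy -> policy.resolve(data))
                .filter(Objects::nonNull)                                  // first policy that decides wins
                .findFirst()
                .orElse(defaultService);   // default policy: directly specify the target service
    }
}
```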
The request mode policy determines the target service according to the request mode (HTTP method) of the interface; the parameter whitelist policy determines the target service according to whether a specified parameter of the request is in a whitelist; the parameter proportion policy determines the target service according to the hash value of a specified request parameter and a preset proportion parameter; the traffic proportion policy splits requests to the target services by proportion. Each splitting policy contains its corresponding splitting rules.
For example, after receiving a request, the parameter whitelist policy parses a specified parameter such as the user identifier. If the target identifier of the request is in the whitelist configuration list, the request is split to the new service; otherwise it is split to the old service.
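A sketch of the parameter whitelist policy follows; the parameter name "userId" is an assumption standing in for the specified parameter (e.g. the user identifier).

```java
import java.util.Set;

class WhitelistPolicy implements SplitPolicy {

    private final Set<String> whitelist;

    WhitelistPolicy(Set<String> whitelist) { this.whitelist = whitelist; }

    @Override
    public int priority() { return 1; }

    @Override
    public TargetService resolve(TargetData data) {
        if (whitelist == null || whitelist.isEmpty()) {
            return null;                                          // no whitelist configured: let the next policy decide
        }
        String userId = data.getParameters().get("userId");       // assumed specified parameter
        return whitelist.contains(userId) ? TargetService.NEW_SERVICE : TargetService.OLD_SERVICE;
    }
}
```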
Another example is the parameter proportion policy, with preset proportion parameter percent1:
If percent1 = 0: all requests are split to the old service.
If 0 < percent1 < 100: after receiving a request from the source service, a specified parameter such as the user identifier is parsed, its hash value m is calculated, m is right-shifted by 16 bits to obtain k, and an AND operation on m and k yields n. If n <= percent1, the request is split to the new service; if n > percent1, the splitting system splits the request to the old service.
If percent1 = 100: all requests are split to the new service.
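A sketch of this parameter proportion policy is shown below. Folding n onto the 0..99 range with a modulo is an assumption added for illustration; the patent only states that n is compared with the preset proportion parameter.

```java
class ParameterProportionPolicy implements SplitPolicy {

    private final int percent1;   // preset proportion parameter, 0..100

    ParameterProportionPolicy(int percent1) { this.percent1 = percent1; }

    @Override
    public int priority() { return 2; }

    @Override
    public TargetService resolve(TargetData data) {
        if (percent1 <= 0)   return TargetService.OLD_SERVICE;    // 0: all requests to the old service
        if (percent1 >= 100) return TargetService.NEW_SERVICE;    // 100: all requests to the new service
        String userId = data.getParameters().get("userId");       // assumed specified parameter
        if (userId == null) return null;                           // let the next policy decide
        int m = userId.hashCode();
        int k = m >>> 16;              // right shift m by 16 bits to obtain k
        int n = (m & k) % 100;         // AND calculation of m and k, folded onto 0..99 (assumption)
        return n <= percent1 ? TargetService.NEW_SERVICE : TargetService.OLD_SERVICE;
    }
}
```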
Another example is the traffic proportion policy, with preset proportion parameter percent2:
After a request is received, a random number between 1 and 100 is generated; if the random number is less than or equal to percent2, the request is split to the new service, otherwise it is split to the old service.
In a specific implementation, in the embodiment of the present application, the interface information is first matched with the preset splitting policies in the local cache to obtain the target splitting policy matched with the interface information; if no matching policy exists in the local cache, the interface information is matched with the preset splitting policies in the database to obtain the target splitting policy.
If no preset splitting policy matching the interface information can be found in the database either, the target service corresponding to the target HTTP request is determined by a default splitting policy.
The default splitting policy directly specifies the target service, for example specifying the new service or the old service as the target service.
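The lookup order described above is sketched below: local cache first, then the database, with the default policy as the final fallback. PolicyRepository and the method names are assumptions used for illustration.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

interface PolicyRepository {
    SplitPolicy findByInterface(String interfaceKey);   // query the preset splitting policy from the database
}

class PolicyResolver {

    private final Map<String, SplitPolicy> localCache = new ConcurrentHashMap<>();
    private final PolicyRepository repository;
    private final SplitPolicy defaultPolicy;   // directly specifies the old or the new service

    PolicyResolver(PolicyRepository repository, SplitPolicy defaultPolicy) {
        this.repository = repository;
        this.defaultPolicy = defaultPolicy;
    }

    SplitPolicy match(String interfaceKey) {
        SplitPolicy policy = localCache.get(interfaceKey);
        if (policy == null) {
            policy = repository.findByInterface(interfaceKey);   // cache miss: fall back to the database
            if (policy != null) {
                localCache.put(interfaceKey, policy);            // cache it for subsequent requests
            }
        }
        return policy != null ? policy : defaultPolicy;          // default policy avoids request failures
    }

    /** Refreshes one interface's policy in the local cache, e.g. after a change event. */
    void refresh(String interfaceKey, SplitPolicy policy) {
        if (policy != null) {
            localCache.put(interfaceKey, policy);
        } else {
            localCache.remove(interfaceKey);
        }
    }
}
```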
In a specific implementation, in the embodiment of the present application, a changed splitting policy may be obtained, the database is modified according to the changed splitting policy, and target information in zookeeper is modified, so that zookeeper triggers a change event and notifies each splitting server to refresh the changed splitting policy from the database into its own cache. The target information may be interface information.
It should be noted that all splitting servers are registered to the zookeeper registration node; when zookeeper triggers a change event, each splitting server observes the event and refreshes the changed splitting policy in its own cache from the database.
For example, referring to fig. 2, an embodiment of the present application provides a schematic diagram of a specific splitting policy configuration. All splitting servers are connected to the zookeeper registration node. When any splitting server changes its configuration to modify a splitting policy, it writes the interface information to zookeeper, and zookeeper triggers a change event containing the splitting servers whose policies need to change and the interface identifiers of the changed policies, so that each splitting server looks up the corresponding splitting policy in the database by interface identifier and refreshes its local configuration cache (a watcher sketch follows).
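The sketch below shows how each splitting server could watch a zookeeper node and refresh its local cache from the database when a change event fires. The node path, connect string, session timeout and the idea of carrying the interface identifier in the node data are assumptions; the patent only states that zookeeper triggers a change event and the servers refresh their caches.

```java
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class PolicyChangeWatcher implements Watcher {

    private static final String POLICY_NODE = "/split/policy-change";   // assumed node path

    private final ZooKeeper zooKeeper;
    private final PolicyResolver resolver;       // holds the local cache (see the sketch above)
    private final PolicyRepository repository;

    public PolicyChangeWatcher(PolicyResolver resolver, PolicyRepository repository) throws Exception {
        this.resolver = resolver;
        this.repository = repository;
        this.zooKeeper = new ZooKeeper("zk-host:2181", 30_000, this);    // assumed connect string
        zooKeeper.getData(POLICY_NODE, this, null);                      // register the watch on the node
    }

    @Override
    public void process(WatchedEvent event) {
        if (event.getType() == Event.EventType.NodeDataChanged) {
            try {
                // Re-read the node (re-arming the watch) to learn which interface changed,
                // then reload that interface's policy from the database into the local cache.
                byte[] data = zooKeeper.getData(POLICY_NODE, this, null);
                String interfaceKey = new String(data);
                resolver.refresh(interfaceKey, repository.findByInterface(interfaceKey));
            } catch (Exception e) {
                // in a real system: log and retry
            }
        }
    }
}
```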
In this way, real-time policy matching is guaranteed: during policy matching the policy is read from the cache instead of being queried from the DB, which reduces I/O interaction, avoids cache-penetration problems, lowers DB pressure and the I/O overhead of interacting with middleware, and improves the interface request rate. Reliability of policy matching is also guaranteed: if no splitting policy is matched, the default policy is applied, avoiding request exceptions caused by a missing policy.
Step S14: determine the target service corresponding to the target HTTP request by using the target splitting policy and the target data.
In a specific implementation, the target service corresponding to the target HTTP request may be determined according to the splitting rules in the target splitting policy and the target data.
Step S15: forward the target HTTP request to the target service.
Further, in the embodiment of the present application, the return data of the target service may also be obtained and returned to the source service.
If the request fails with an exception, a preset status code is returned to the source service, where different exception types correspond to different preset status codes.
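A minimal sketch of mapping exception types to such preset status codes is shown below; the numeric codes and the exception categories are assumptions, since the patent does not list concrete values.

```java
import java.io.IOException;
import java.net.SocketTimeoutException;

enum SplitErrorCode {
    DOWNSTREAM_TIMEOUT(590),
    DOWNSTREAM_ERROR(591),
    SPLITTING_ERROR(592);

    final int statusCode;

    SplitErrorCode(int statusCode) { this.statusCode = statusCode; }
}

class ExceptionMapper {
    /** Returns the preset status code corresponding to the exception type. */
    int toStatusCode(Exception e) {
        if (e instanceof SocketTimeoutException) return SplitErrorCode.DOWNSTREAM_TIMEOUT.statusCode;
        if (e instanceof IOException)            return SplitErrorCode.DOWNSTREAM_ERROR.statusCode;
        return SplitErrorCode.SPLITTING_ERROR.statusCode;   // any other splitting-system exception
    }
}
```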
It can be seen that, in the embodiment of the present application, a target HTTP request sent by a source service is first acquired; the target HTTP request is then parsed to extract target key information, which is combined into target data in a preset format; the target data is matched against the preset splitting policies to obtain a target splitting policy matched with the target data; the target splitting policy and the target data are used to determine the target service corresponding to the target HTTP request; and finally the target HTTP request is forwarded to the target service. That is, a preset splitting policy is matched according to the target key information in the received target HTTP request, and the request is forwarded to the target service according to that policy. Because the splitting policy is tied to the request itself, the scheme is unaffected by differences between services, is generally applicable, and improves the effectiveness and accuracy of the configured splitting policy.
Further, in a specific embodiment, the HTTP request splitting scheme in the present application may be implemented by a splitting system.
Accordingly, the specific exception types and their solutions may be as follows:
1. The splitting system itself is healthy, but the downstream new service has a business exception. Solution: switch back to the old service at the interface dimension; by closing the interface splitting configuration master switch, requests from the upstream source service are entirely split to the old service cluster to ensure service availability. That is, the splitting system provided in the embodiment of the present application includes an interface splitting configuration master switch, and when the new service fails, the master switch is closed and the target HTTP request is forwarded to the old service.
2. The gray-release logic of the splitting system is abnormal. Solution: switch back to the downstream old service at the application dimension; by closing the application switch, requests from the upstream source service are entirely split to the old service cluster to ensure service availability. That is, the splitting system further includes an application switch, and when the splitting logic is abnormal, the application switch may be closed so that target HTTP requests are sent to the old service.
3. The splitting system is unavailable. Solution: the LB (LoadBalance, load balancer) switches directly to the downstream old service.
4. The splitting system is under high load. Solution: the splitting system integrates a rate-limiting component (Sentinel) to limit the QPS (queries per second) of the splitting interface, so that the load of the splitting system does not become too high and service availability is ensured (see the sketch after this list).
5. A downstream service (either the new or the old service) times out or fails. Solution: when the splitting system splits a request to a downstream service, it dynamically sets the request timeout according to the splitting policy configuration; the splitting system fails fast (failfast) and does not retry on failure, leaving the upstream service or the load gateway to decide whether to retry (for cross-system calls the upstream service must consider exception scenarios), while the downstream service must consider idempotence, and so on.
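The sketch below limits the QPS of the splitting interface with the Sentinel flow-control component mentioned in item 4; the resource name and the QPS threshold are assumptions.

```java
import com.alibaba.csp.sentinel.Entry;
import com.alibaba.csp.sentinel.SphU;
import com.alibaba.csp.sentinel.slots.block.BlockException;
import com.alibaba.csp.sentinel.slots.block.RuleConstant;
import com.alibaba.csp.sentinel.slots.block.flow.FlowRule;
import com.alibaba.csp.sentinel.slots.block.flow.FlowRuleManager;
import java.util.Collections;

public class SplitRateLimiter {

    static {
        FlowRule rule = new FlowRule();
        rule.setResource("splitInterface");                  // assumed resource name
        rule.setGrade(RuleConstant.FLOW_GRADE_QPS);
        rule.setCount(1000);                                  // assumed QPS threshold
        FlowRuleManager.loadRules(Collections.singletonList(rule));
    }

    public String handle(TargetData data) {
        try (Entry entry = SphU.entry("splitInterface")) {
            // within the limit: forward the request according to the matched splitting policy
            return forward(data);
        } catch (BlockException e) {
            // over the QPS limit: reject the request (or fall back to the old service)
            return "BLOCKED";
        }
    }

    private String forward(TargetData data) {
        return "OK";   // placeholder for the actual forwarding logic
    }
}
```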
It can be understood that, in the embodiment of the present application, if any step of the splitting system throws an exception, the request is forwarded to the specified old service by default, avoiding request failures caused by exceptions inside the splitting system's own code. By contrast, the traditional approach of splitting at the LB layer inevitably costs LB-layer performance, and because the LB sits at the top of the whole application, any blockage of the LB affects the forwarding of all services. The scheme provided by the present application is therefore more reliable.
In the following, the HTTP request splitting scheme provided in the present application is described in detail, taking a .Net service as the old service and a Java service as the new service. The original domain name of the old service is domain name A; to adapt to the splitting system, a new domain name B is created. Domain name B points to the .Net service, domain name A points to the splitting system, and the Java service is registered with a service registry (Eureka). With this adjustment, a request from the service caller (i.e., the source service) based on domain name A first reaches the splitting system. After receiving the request, the splitting system decides, according to the preset splitting policy and the request parameters and characteristics, whether the current request should be forwarded to the .Net service or the Java service; after forwarding, the return value of the .Net or Java service is responded to the caller, thereby achieving traffic splitting. The splitting system intercepts the HTTP request, parses the HTTP message of the current request to obtain the target key information, including the request mode, request parameters, Header information and the like, and combines this information into the data entity required by the splitting policy. In addition, the splitting rules in the splitting policy can be customized: splitting by request mode, splitting by interface proportion, and splitting by one or more values of a request parameter are supported, different interfaces may use different rules, and a default policy can be specified. The request is forwarded to the target service according to the splitting policy, and the return value of the downstream service is returned to the caller. The splitting policies and rules are configurable, and after configuration they are synchronously updated into the memory of every server of the splitting system.
For example, referring to fig. 3, fig. 3 is a comparison diagram of HTTP request splitting provided in this embodiment. In the prior-art approach, the upstream business system (i.e., the source service) sends HTTP requests to the .NET service directly via the original domain name A. In the scheme provided in the present application, the HTTP request sent to the original domain name A is split to the target service by the splitting system: when the target service is the .NET service, the request is sent to it via the new domain name B; when the target service is the Java service, the request is sent to it through the Java gateway. After the HTTP request reaches the server side through the LB, the new LB (i.e., requests on the new domain name B) points to the .Net service, and the original LB (i.e., requests on the original domain name A) points to the splitting system, which splits traffic according to the splitting policies configured per interface, by parameter dimension, traffic dimension and so on. In this way the original traffic is transferred to the splitting system, and the splitting service requests the downstream service, i.e., the .Net service or the new Java service, according to the policy.
For example, referring to fig. 4, fig. 4 is a flowchart of a specific HTTP request splitting method provided in an embodiment of the present application. First, it is judged whether the request is a redirection request or a front-end resource request; if so, it is redirected directly or sent to the downstream .Net service. Otherwise, key information such as the request header, param and method is parsed to form target data in the preset format; the configuration information in the cache, i.e., the splitting policy configured for the interface, is obtained, and if it is absent it is queried from the db (database) and cached locally. The relevant splitting policy is then executed according to the configuration information, the target url is confirmed, the HTTP request is forwarded, and the key information of the upstream request is passed through transparently. If execution succeeds, the return result of the target service is passed through to the upstream business system; if an exception occurs, the splitting system's custom status code is returned to the upstream business system. The splitting system supports five policies in total: the request mode policy, which splits the request directly to a target service according to the request mode of the interface; the parameter whitelist policy, which designates the target service according to whether certain parameter information in the request is in the whitelist; the parameter proportion policy, which designates the target service by splitting proportionally according to the hash value of a parameter; the traffic proportion policy, which designates the target service by splitting the request traffic proportionally; and the default policy, which directly specifies the target service.
Specifically, the splitting process using these policies is, for example, as follows. If the interface splitting switch of the requested interface is closed, the splitting system sends the request to the old service. If the interface splitting switch is open, the whitelist is used for splitting when it is not empty; when the whitelist is empty, the parameter splitting proportion is checked: if the proportion is empty or 0, all requests are routed to the old service, otherwise requests are split to the new service according to the parameter proportion and the remaining requests are split to the old service (see the sketch below). That is, in the embodiment of the present application, each interface has a corresponding interface splitting switch.
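A sketch of this per-interface decision cascade, interface splitting switch first, then parameter whitelist, then parameter proportion, with the old service as the fallback, is shown below; InterfaceConfig and its fields are assumed names.

```java
import java.util.Set;

class InterfaceConfig {
    boolean switchOn;        // interface splitting switch
    Set<String> whitelist;   // parameter whitelist, may be empty
    Integer percent;         // parameter splitting proportion, may be null or 0
}

class CascadeResolver {
    TargetService decide(InterfaceConfig cfg, TargetData data) {
        if (!cfg.switchOn) {
            return TargetService.OLD_SERVICE;            // switch closed: send the request to the old service
        }
        if (cfg.whitelist != null && !cfg.whitelist.isEmpty()) {
            TargetService byWhitelist = new WhitelistPolicy(cfg.whitelist).resolve(data);
            if (byWhitelist != null) {
                return byWhitelist;                      // whitelist decides when it is configured
            }
        }
        if (cfg.percent == null || cfg.percent == 0) {
            return TargetService.OLD_SERVICE;            // proportion empty or 0: route everything to the old service
        }
        TargetService byProportion = new ParameterProportionPolicy(cfg.percent).resolve(data);
        return byProportion != null ? byProportion : TargetService.OLD_SERVICE;
    }
}
```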
In this way, stable switching between the old and new services is achieved, the traffic splitting policies are flexibly supported, and the traffic proportion or policy can be adjusted step by step according to the online effect, ensuring flexible availability of service requests. The scheme does not depend on changes to the upstream system, which greatly reduces the manpower required. Fast switching is also possible: the upgrade is closed-loop and does not rely on scheduling by an external system, which greatly improves the migration speed between the old and new services.
Referring to fig. 5, the present application provides an HTTP request splitting apparatus, including:
a target request acquisition module 11, configured to acquire a target HTTP request sent by a source service;
a target request parsing module 12, configured to parse the target HTTP request to extract target key information, and combine the target key information into target data in a preset format;
a splitting policy matching module 13, configured to match the target data with a preset splitting policy to obtain a target splitting policy matched with the target data;
a target service determination module 14, configured to determine, by using the target splitting policy and the target data, a target service corresponding to the target HTTP request;
and a target request forwarding module 15, configured to forward the target HTTP request to the target service.
It can be seen that, in the embodiment of the present application, a target HTTP request sent by a source service is first acquired; the target HTTP request is then parsed to extract target key information, which is combined into target data in a preset format; the target data is matched against the preset splitting policies to obtain a target splitting policy matched with the target data; the target splitting policy and the target data are used to determine the target service corresponding to the target HTTP request; and finally the target HTTP request is forwarded to the target service. That is, a preset splitting policy is matched according to the target key information in the received target HTTP request, and the request is forwarded to the target service according to that policy. Because the splitting policy is tied to the request itself, the scheme is unaffected by differences between services, is generally applicable, and improves the effectiveness and accuracy of the configured splitting policy.
The target request acquisition module 11 is specifically configured to intercept an HTTP request sent by the source service to a specified domain name, judge whether the HTTP request meets a preset screening condition, and if so, determine the HTTP request to be a target HTTP request.
The splitting policy matching module 13 specifically includes:
an interface information determination submodule, configured to determine interface information according to the target data;
and a splitting policy matching submodule, configured to match the interface information with a preset splitting policy to obtain a target splitting policy matched with the interface information.
The splitting policy matching submodule is specifically configured to match the interface information with the preset splitting policies in the local cache to obtain the target splitting policy matched with the interface information, and, if no matching policy exists in the local cache, to match the interface information with the preset splitting policies in the database to obtain the target splitting policy.
Further, the apparatus further includes:
a splitting policy configuration module, configured to obtain a changed splitting policy, modify the database according to the changed splitting policy, and modify target information in zookeeper, so that zookeeper triggers a change event and notifies each splitting server to refresh the changed splitting policy from the database into its own cache.
The target service determination module 14 is specifically configured to determine the target service corresponding to the target HTTP request according to the splitting rules in the target splitting policy and the target data.
Further, the apparatus further includes:
a data return module, configured to obtain return data of the target service and return the return data to the source service, and, if the request fails with an exception, to return a preset status code to the source service;
wherein different exception types correspond to different preset status codes.
Referring to fig. 6, an embodiment of the present application discloses an electronic device 20, which includes a processor 21 and a memory 22. The memory 22 is used to store a computer program, and the processor 21 is configured to execute the computer program to implement the HTTP request splitting method disclosed in the foregoing embodiments.
For the specific process of the HTTP request splitting method, reference may be made to the corresponding content disclosed in the foregoing embodiments, and details are not described herein again.
The memory 22, as a carrier for resource storage, may be a read-only memory, a random access memory, a magnetic disk or an optical disk, and the storage may be transient or persistent.
In addition, the electronic device 20 further includes a power supply 23, a communication interface 24, an input/output interface 25 and a communication bus 26. The power supply 23 provides the operating voltage for the hardware devices on the electronic device 20; the communication interface 24 creates a data transmission channel between the electronic device 20 and external devices, and the communication protocol it follows may be any communication protocol applicable to the technical solution of the present application, which is not specifically limited herein; the input/output interface 25 obtains external input data or outputs data to the outside, and its specific interface type may be selected according to the specific application requirements, which is not specifically limited herein.
Further, an embodiment of the present application also discloses a computer-readable storage medium for storing a computer program, where the computer program, when executed by a processor, implements the HTTP request splitting method disclosed in the foregoing embodiments.
For the specific process of the HTTP request splitting method, reference may be made to the corresponding content disclosed in the foregoing embodiments, and details are not described herein again.
The embodiments are described in a progressive manner; each embodiment focuses on its differences from the others, and the same or similar parts among the embodiments may be cross-referenced. Since the apparatus disclosed in the embodiments corresponds to the method disclosed in the embodiments, its description is brief, and the relevant points can be found in the description of the method.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), Read-Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The HTTP request splitting method, apparatus, device and medium provided by the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method and its core idea. Meanwhile, for those skilled in the art, changes may be made to the specific implementations and the application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. An HTTP request splitting method, comprising:
acquiring a target HTTP request sent by a source service;
parsing the target HTTP request to extract target key information, and combining the target key information into target data in a preset format;
matching the target data with a preset splitting policy to obtain a target splitting policy matched with the target data;
determining a target service corresponding to the target HTTP request by using the target splitting policy and the target data;
and forwarding the target HTTP request to the target service.
2. The HTTP request splitting method according to claim 1, wherein the acquiring of the target HTTP request sent by the source service comprises:
intercepting an HTTP request sent by the source service to a specified domain name;
judging whether the HTTP request meets a preset screening condition;
and if so, determining the HTTP request to be a target HTTP request.
3. The HTTP request splitting method according to claim 1, wherein the matching the target data with a preset splitting policy to obtain a target splitting policy matched with the target data comprises:
determining interface information according to the target data;
and matching the interface information with a preset splitting policy to obtain a target splitting policy matched with the interface information.
4. The HTTP request splitting method according to claim 3, wherein the matching the interface information with a preset splitting policy to obtain a target splitting policy matched with the interface information comprises:
matching the interface information with a preset splitting policy in a local cache to obtain a target splitting policy matched with the interface information;
and if the target splitting policy does not exist in the local cache, matching the interface information with the preset splitting policy in a database to obtain the target splitting policy matched with the interface information.
5. The HTTP request splitting method according to claim 4, further comprising:
acquiring a changed splitting policy, modifying the database according to the changed splitting policy, and modifying target information in zookeeper, so that zookeeper triggers a change event and notifies each splitting server to refresh the changed splitting policy from the database into its own cache.
6. The HTTP request splitting method according to claim 1, wherein the determining the target service corresponding to the target HTTP request by using the target splitting policy and the target data comprises:
determining the target service corresponding to the target HTTP request according to a splitting rule in the target splitting policy and the target data.
7. The HTTP request splitting method according to any one of claims 1 to 6, further comprising, after forwarding the target HTTP request to the target service:
obtaining return data of the target service, and returning the return data to the source service;
if the request fails with an exception, returning a preset status code to the source service;
wherein different exception types correspond to different preset status codes.
8. An HTTP request splitting apparatus, comprising:
a target request acquisition module, configured to acquire a target HTTP request sent by a source service;
a target request parsing module, configured to parse the target HTTP request to extract target key information and combine the target key information into target data in a preset format;
a splitting policy matching module, configured to match the target data with a preset splitting policy to obtain a target splitting policy matched with the target data;
a target service determination module, configured to determine, by using the target splitting policy and the target data, a target service corresponding to the target HTTP request;
and a target request forwarding module, configured to forward the target HTTP request to the target service.
9. An electronic device, comprising:
a memory for storing a computer program;
a processor for executing the computer program to implement the HTTP request splitting method of any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the HTTP request splitting method of any one of claims 1 to 7.
CN202111354001.2A 2021-11-12 2021-11-12 HTTP request distribution method, device, equipment and medium Pending CN114143269A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111354001.2A CN114143269A (en) 2021-11-12 2021-11-12 HTTP request distribution method, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN114143269A true CN114143269A (en) 2022-03-04

Family

ID=80393448

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111354001.2A Pending CN114143269A (en) 2021-11-12 2021-11-12 HTTP request distribution method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN114143269A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102711079A (en) * 2011-03-28 2012-10-03 中兴通讯股份有限公司 Method and system for supporting mobility of Internet protocol (IP) shunt connection
CN108566342A (en) * 2018-04-12 2018-09-21 国家计算机网络与信息安全管理中心 Multi-service flow separate system based on SDN frameworks and streamed data processing method
CN109472705A (en) * 2018-09-26 2019-03-15 平安健康保险股份有限公司 Claims Resolution method, system, computer equipment and storage medium
CN110011928A (en) * 2019-04-19 2019-07-12 平安科技(深圳)有限公司 Flow equalization carrying method, device, computer equipment and storage medium
CN111241078A (en) * 2020-01-07 2020-06-05 网易(杭州)网络有限公司 Data analysis system, data analysis method and device
CN111338812A (en) * 2020-01-22 2020-06-26 中国民航信息网络股份有限公司 Data processing method and device
CN111600930A (en) * 2020-04-09 2020-08-28 网宿科技股份有限公司 Micro-service request traffic management method, device, server and storage medium
CN111857974A (en) * 2020-07-30 2020-10-30 江苏方天电力技术有限公司 Service access method and device based on load balancer
CN112019427A (en) * 2020-08-28 2020-12-01 浙江九州云信息科技有限公司 Wireless side edge gateway of mobile cellular network

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115002218A (en) * 2022-05-26 2022-09-02 平安银行股份有限公司 Traffic distribution method, traffic distribution device, computer equipment and storage medium
CN115002218B (en) * 2022-05-26 2023-08-04 平安银行股份有限公司 Traffic distribution method, traffic distribution device, computer equipment and storage medium

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination