CN111698273B - Method and device for processing request - Google Patents

Info

Publication number
CN111698273B
CN111698273B (granted from application CN201910197384.3A)
Authority
CN
China
Prior art keywords
request
application
data
sending
configuration file
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910197384.3A
Other languages
Chinese (zh)
Other versions
CN111698273A (en)
Inventor
张开涛
王杰颖
邹子靖
林本兴
田子玉
纪鸿焘
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd and Beijing Jingdong Shangke Information Technology Co Ltd
Priority to CN201910197384.3A
Publication of CN111698273A
Application granted
Publication of CN111698273B
Legal status: Active
Anticipated expiration

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 - Network services
    • H04L 67/56 - Provisioning of proxy services
    • H04L 67/568 - Storing data temporarily at an intermediate stage, e.g. caching
    • H04L 67/566 - Grouping or aggregating service requests, e.g. for unified processing
    • H04L 67/2866 - Architectures; Arrangements
    • H04L 67/30 - Profiles
    • H04L 67/60 - Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • H04L 67/63 - Routing a service request depending on the request content or context

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention discloses a method and a device for processing a request, and relates to the field of computer technology. In one embodiment, the method includes: receiving a first request from a first sending end and forwarding it to an application; obtaining data responding to the first request from the application and storing the data in a gateway cache; and obtaining the data responding to the first request from the gateway cache and sending it to the first sending end. This implementation ensures that part of the requests can still be processed when the application is unavailable, improving the stability and availability of the service.

Description

Method and device for processing request
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method and apparatus for processing a request.
Background
Currently, while an application is restarting, the application is unavailable, its cache is unavailable as well, and requests cannot be processed at all. In the prior art, an application's cached data is saved into an external store such as Redis. After the application restarts it becomes available again and processes requests by looking the data up in Redis; this technique alleviates the high application load and the impact on the database that follow a restart.
In the process of implementing the present invention, the inventors found at least the following problems in the prior art:
During the restart itself, however, the application is unavailable; the prior-art technique only guarantees that the data exists, and an unavailable application cannot look the data up in Redis. It therefore cannot solve the problem that, while the application is restarting, the application's cache is unavailable and requests cannot be processed at all.
Disclosure of Invention
In view of the above, the embodiments of the present invention provide a method and an apparatus for processing a request, which can ensure that part of the requests can still be processed when an application is unavailable, thereby improving the stability and availability of a service.
To achieve the above object, according to one aspect of an embodiment of the present invention, there is provided a method of processing a request.
The method for processing the request comprises the following steps:
receiving a first request from a first sending end, and sending the first request to an application;
acquiring data responding to the first request from the application, and storing the data in a gateway cache;
and acquiring the data responding to the first request from the gateway cache, and sending the data to the first sending end.
In one embodiment, after the data is sent to the first sending end, the method further includes:
receiving a second request from a second sending end;
querying the gateway cache for data responding to the second request;
and if the second request is the same as the first request, sending the data responding to the first request to the second sending end.
In one embodiment, after querying the gateway cache for data in response to the second request according to the second request, further comprising:
if the second request is not the same as the first request, judging whether the application is available or not;
if yes, the second request is sent to the application, data responding to the second request is obtained from the application and is stored in the gateway cache, the data responding to the second request is obtained from the gateway cache, and the data is sent to the second sending end;
if not, the second request is sent to other applications except the application, data responding to the second request is obtained from the other applications and is stored in the gateway cache, and the data responding to the second request is obtained from the gateway cache and is sent to the second sending end.
In one embodiment, before querying the gateway cache for data in response to the second request according to the second request, further comprising:
receiving an address of a configuration file sent by the application;
acquiring the configuration file according to the address of the configuration file, wherein the configuration file comprises a rate-limiting rule;
judging whether the second request satisfies the rate-limiting rule, and if so, returning the second request to the second sending end.
In one embodiment, if the configuration file further includes a degradation rule, after the configuration file is obtained according to the address of the configuration file, the method further includes:
and judging whether the second request meets the degradation rule or not, if so, sending preset degradation data to the second sending end.
To achieve the above object, according to another aspect of an embodiment of the present invention, there is provided an apparatus for processing a request.
The device for processing the request in the embodiment of the invention comprises the following components:
the forwarding unit is used for receiving a first request from a first sending end and sending the first request from the first sending end to an application;
a storage unit, configured to obtain data responding to the first request from the application, and store the data in a gateway cache;
the sending unit is used for acquiring the data responding to the first request from the gateway cache and sending the data to the first sending end.
In one embodiment, further comprising:
a receiving unit, configured to receive a second request from a second transmitting end after transmitting the data to the first transmitting end;
a query unit, configured to query the gateway cache for data in response to the second request according to the second request;
and the processing unit is used for sending the data responding to the first request to the second sending end if the second request is the same as the first request.
In an embodiment, the processing unit is specifically further configured to:
if the second request is not the same as the first request, judging whether the application is available or not;
if yes, the second request is sent to the application, data responding to the second request is obtained from the application and is stored in the gateway cache, the data responding to the second request is obtained from the gateway cache, and the data is sent to the second sending end;
if not, the second request is sent to other applications except the application, data responding to the second request is obtained from the other applications and is stored in the gateway cache, and the data responding to the second request is obtained from the gateway cache and is sent to the second sending end.
In one embodiment, further comprising:
the control unit is used for receiving the address of the configuration file sent by the application before querying the gateway cache for data responding to the second request; acquiring the configuration file according to its address, wherein the configuration file comprises a rate-limiting rule; and judging whether the second request satisfies the rate-limiting rule, and if so, returning the second request to the second sending end.
In an embodiment, the control unit is specifically further configured to:
if the configuration file further comprises a degradation rule, after the configuration file is obtained according to the address of the configuration file, judging whether the second request meets the degradation rule, and if yes, sending preset degradation data to the second sending end.
To achieve the above object, according to still another aspect of an embodiment of the present invention, there is provided an electronic apparatus.
An electronic device according to an embodiment of the present invention includes: one or more processors; and a storage device storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of processing a request provided by the embodiments of the present invention.
To achieve the above object, according to still another aspect of an embodiment of the present invention, a computer-readable medium is provided.
A computer readable medium of an embodiment of the present invention has stored thereon a computer program which, when executed by a processor, implements a method for processing a request provided by the embodiment of the present invention.
One embodiment of the above invention has the following advantage or benefit: the first request received from the first sending end is forwarded to the application, and the data responding to the first request is obtained from the application and saved in the gateway cache. Thus, when the application is unavailable, the data stored in the gateway cache can be used to process part of the requests: the cache remains usable during the application restart, part of the requests can be processed while the application is unavailable, and the stability and availability of the service are improved.
Further effects of the above optional implementations are described below in connection with specific embodiments.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting the invention. Wherein:
FIG. 1 is a schematic diagram of the main flow of a method of processing a request according to an embodiment of the invention;
FIG. 2 is an interactive schematic diagram of a method of processing a request according to another embodiment of the invention;
FIG. 3 is a schematic diagram of the main flow of a method of processing a request according to yet another embodiment of the invention;
FIG. 4 is a schematic diagram of the major modules of an apparatus for processing requests according to one embodiment of the invention;
FIG. 5 is a schematic diagram of an apparatus for processing requests according to another embodiment of the present invention;
FIG. 6 is an exemplary system architecture diagram in which embodiments of the present invention may be applied;
fig. 7 is a schematic diagram of a computer system suitable for use in implementing an embodiment of the invention.
Detailed Description
Exemplary embodiments of the present invention will now be described with reference to the accompanying drawings, in which various details of the embodiments of the present invention are included to facilitate understanding, and are to be considered merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
It is noted that embodiments of the invention and features of the embodiments may be combined with each other without conflict.
In actual development, caches are used to process a large number of requests so as to ensure high service availability, and multi-level caches are generally used to improve service performance, such as a local cache, a distributed cache, an access-layer cache, and a browser cache. A local cache in a Java application is typically implemented with a hash table plus a linked list, such as LinkedHashMap (a subclass of HashMap implementing the Map interface that combines a hash table with a linked list, giving a predictable iteration order such as insertion order), or with Ehcache (a pure-Java in-process cache framework known for being fast and lightweight). A distributed cache is typically implemented with Memcached (a high-performance distributed memory object caching system for dynamic Web applications that reduces database load by caching data and objects in memory, reducing the number of database reads and thereby speeding up dynamic, database-driven websites) or Redis (an open-source key-value store written in ANSI C that supports networking, can run purely in memory or with persistence, and provides APIs for multiple languages). An access-layer cache can be implemented in a reverse-proxy Web server such as Nginx (a high-performance HTTP and reverse-proxy server that also provides IMAP/POP3/SMTP proxying).
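The LinkedHashMap-style local cache described above (a hash table plus a linked list with a predictable iteration order) can be sketched in Python with an OrderedDict. This LRU variant is an illustrative analogue, not code from the patent; the class name and capacity bound are assumptions:

```python
from collections import OrderedDict

class LRUCache:
    """A minimal local cache with LRU eviction, analogous to a
    LinkedHashMap configured with access order plus a size bound."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)  # mark as most recently used
        return self._data[key]

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict least recently used
```

In a Java application the same behavior is obtained by overriding `removeEldestEntry` on a LinkedHashMap constructed with access order.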
At present, after receiving a request from a sending end, an application implemented in a language such as Java, Go, or Python first queries its own cache; on a miss, it accesses the database to obtain the data. The application then performs computation on the data and returns the result to the sending end. Because this computation consumes the application's CPU resources, the application stores the computed result in its cache, so that when the same request is received again the stored result can be returned directly, skipping the database access and the computation. However, when the application is unavailable, none of the application's caching, degradation, or throttling is available, and requests cannot be processed at all.
After an application restarts, the data in its cache is gone and the cache must be rebuilt. At that moment the application receives a large number of requests from sending ends but cannot obtain data from its cache, so it must access the database frequently; since the database's processing capacity is comparatively weak, the application's load becomes very high and the database takes a heavy hit, and if many applications restart at once the service may become unavailable. One prior-art technique saves an application's cached data into Redis. After the restart the application is available again and processes requests by looking the data up in Redis, which solves the post-restart problem. During the restart itself, however, the application is unavailable; the technique only guarantees that the data exists, and an unavailable application cannot look the data up in Redis, so it cannot solve the problem that the cache is unavailable and requests cannot be processed at all while the application is restarting.
To solve the problems in the prior art, an embodiment of the present invention provides a method for processing a request, as shown in fig. 1, the method includes:
step S101, a first request from a first sending end is received, and the first request from the first sending end is sent to an application.
In this step, the load balancer can be made to receive the first request from the first sending end by modifying the destination address configured in the load balancer. The first request may then be forwarded to the application according to the application's address. At this point the application is available, and the application processes the first request.
Step S102, data responding to the first request is obtained from the application, and the data is stored in a gateway cache.
In this step, the application receives and processes the first request, obtaining the data responding to it from the database in which that data is stored; the data responding to the first request can therefore be obtained from the application. A cache policy may be preset, and the data responding to the first request is saved to the gateway cache according to that policy. Because the data is now stored in the gateway cache, if a second request identical to the first is received from a second sending end while the application is unavailable, the data obtained from the gateway cache can be sent to the second sending end directly: since the second request is identical to the first, the data responding to the second request is exactly the data responding to the first. This solves the problem that requests cannot be processed at all when the application is unavailable; with the gateway cache, part of the requests can still be processed, improving the stability and availability of the service.
Step 103, obtaining the data responding to the first request from the gateway cache, and sending the data to the first sending end.
In this embodiment, it should be noted that after the application restarts, part of the requests can be processed from the gateway cache, so the embodiment of the present invention also reduces the impact on the database. During the restart, the application goes from being completely unable to process requests to being able to process part of them, which improves the experience of the user at the sending end.
In the embodiment of the present invention, the first request received from the first sending end is forwarded to the application, and the data responding to it is obtained from the application and saved in the gateway cache. Thus, when the application is unavailable, the data stored in the gateway cache can be used to process part of the requests: the cache remains usable during the application restart, part of the requests can be processed while the application is unavailable, and the stability and availability of the service are improved.
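A minimal sketch of steps S101-S103, with the application backend simulated by a plain callable; the class name, the dict-based cache, and the request keys are illustrative assumptions, not part of the patent:

```python
class Gateway:
    """Sketch of steps S101-S103: forward the request to the application,
    save the response into the gateway cache, then answer from the cache."""

    def __init__(self, application):
        self.application = application   # callable: request -> response data
        self.cache = {}                  # gateway cache, keyed by request

    def handle(self, request):
        # Step S101: receive the request and forward it to the application.
        data = self.application(request)
        # Step S102: save the response data into the gateway cache.
        self.cache[request] = data
        # Step S103: answer the sending end from the gateway cache.
        return self.cache[request]
```

When the application later becomes unavailable, an identical request can be answered directly from `self.cache` without touching the application, which is the availability property the embodiment claims.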
In order to solve the problems of the prior art, another embodiment of the present invention provides a method for processing a request. As shown in fig. 2, the method includes:
step 201, a first request from a first sending end is received, and the first request from the first sending end is sent to an application.
In this step, the sending end may be a user end or a service end. The method provided by the embodiment of the invention can be applied to a gateway built with OpenResty (a high-performance Web platform based on Nginx and Lua that integrates a large number of mature Lua libraries, third-party modules, and most dependencies, making it convenient to build dynamic Web applications, Web services, and dynamic gateways that handle very high concurrency with great extensibility). The gateway is deployed on the node where the application is located.
In implementation, the destination address in the load balancer may be set to the address of the gateway; after the first request from the first sending end arrives at the load balancer, the load balancer forwards it to the gateway according to that destination address. The gateway thus receives the first request from the first sending end. The load balancer may be a DNS (Domain Name System) load balancer or an Nginx (a high-performance HTTP and reverse-proxy server that also provides IMAP/POP3/SMTP proxying) load balancer.
Step S202, data responding to the first request is obtained from the application, and the data is stored in a gateway cache.
Step S203, obtaining the data responding to the first request from the gateway cache, and sending the data to the first sending end.
Step S204, a second request from a second sending end is received.
In this step, it should be noted that the first and second sending ends may be the same or different; whether they are the same sending end does not affect the implementation of the embodiment. Likewise, the second request may be the same as or different from the first request. If they are the same, the data responding to the second request is the data responding to the first request, so the data responding to the first request is sent to the second sending end. If they are different, it is determined whether the application is available: if so, the second request is sent to the application; if not, it is sent to an application other than that application. In a specific implementation, the first or second request may be, for example, a request to load an item detail page or a request to add an item to a shopping cart.
Step 205, query data responding to the second request in the gateway cache according to the second request.
Step S206, if the second request is the same as the first request, the data responding to the first request is sent to the second sending end.
Step S207, if the second request is not the same as the first request, judging whether the application is available.
In this step, the application may be unavailable because it is restarting, down, or in an abnormal state. If the application is available, step S208 is performed; if not, step S209 is performed. In a specific implementation, availability may be determined by sending a connection request to the application: if a connection can be established, the application is available; otherwise it is unavailable.
Step S208, sending the second request to the application, obtaining data responding to the second request from the application, storing the data in the gateway cache, obtaining the data responding to the second request from the gateway cache, and sending the data to the second sending end.
In this step, the data may be saved to the gateway cache according to a preset cache policy, which the developer sets using a shared dictionary. It should be noted that since the application is available, it can process the second request, obtaining the data responding to it from the database.
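The preset cache policy mentioned here would, in an OpenResty gateway, typically be configured on a shared dictionary with an expiry time. The following sketch approximates such a policy with a per-entry time-to-live; the class name and the TTL handling are assumptions for illustration, not the patent's implementation:

```python
import time

class TTLCache:
    """Gateway cache with a per-entry time-to-live, approximating a
    shared-dictionary cache policy set in advance by the developer."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._data = {}  # key -> (value, expiry timestamp)

    def put(self, key, value):
        self._data[key] = (value, time.monotonic() + self.ttl)

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        value, expiry = entry
        if time.monotonic() > expiry:
            del self._data[key]  # entry has expired; treat as a miss
            return None
        return value
```

Bounding entries with a TTL keeps the gateway cache from serving arbitrarily stale data once the application becomes available again.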
It should be noted that the application may be a Web application (an application accessed through the Web; its biggest benefit is ease of access, since the user needs only a browser and no additional software).
Step S209, sending the second request to other applications except the application, obtaining data responding to the second request from the other applications, storing the data in the gateway cache, obtaining the data responding to the second request from the gateway cache, and sending the data to the second sending end.
In this step, it should be noted that multiple instances of the application may be deployed on different nodes in a distributed manner. When one application is unavailable, the nodes in the distributed deployment do not affect each other, so the applications other than that one remain available; another application can process the second request, obtaining the data responding to it from the database.
In the embodiment of the present invention, a second request is received from the second sending end, the gateway cache is queried for data responding to it, and when the second request is identical to the first, the data responding to the first request is sent to the second sending end. The work of receiving requests and querying the cache is thus moved out of the application: when the application is unavailable, part of the requests can be processed from the gateway cache, so the situation changes from no requests being processable to part of them being processable, further improving the stability and availability of the service. By determining whether the application is available, letting the application process the request when it is and letting other applications process it when it is not, fault tolerance is achieved. Whether or not the application is available, the data responding to a processed request is saved in the gateway cache, so that the gateway cache can be used to process part of the requests when the application is unavailable; the cache remains usable during the restart, further improving the stability and availability of the service.
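The flow of steps S204-S209 (answer identical requests from the gateway cache, otherwise route to an available application instance) can be sketched as follows; the availability probes and handler callables are illustrative stand-ins for real application instances, not the patent's code:

```python
class FailoverGateway:
    """Sketch of steps S204-S209: answer identical requests from the
    gateway cache, otherwise route to an available application instance."""

    def __init__(self, applications):
        # Each entry: (is_available probe, request handler) for one instance.
        self.applications = applications
        self.cache = {}

    def handle(self, request):
        # Steps S205-S206: an identical earlier request is answered from cache.
        if request in self.cache:
            return self.cache[request]
        # Steps S207-S209: pick the first available application instance;
        # instances on other nodes stay usable when one instance is down.
        for is_available, handler in self.applications:
            if is_available():
                data = handler(request)
                self.cache[request] = data  # save for later identical requests
                return data
        raise RuntimeError("no application instance available")
```

Note that the cache is written regardless of which instance served the request, matching the embodiment's point that responses are saved whether or not the primary application is available.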
In order to solve the problems of the prior art, a further embodiment of the present invention provides a method for processing a request. In an embodiment of the present invention, on the basis of the embodiment shown in fig. 1, as shown in fig. 3, the method further includes, after step 103:
step 301, a second request from a second transmitting end is received.
Step 302, receiving an address of a configuration file sent by the application.
In this step, the configuration file is stored on the node where the application is located; it can be set up with Nginx according to the service requirements, and the application sends the address of the configuration file to the gateway, so that the gateway can receive the address and obtain the file. If the application is unavailable, that does not affect the configuration file, which remains available, and the gateway uses it to rate-limit and degrade requests.
Step 303, obtaining the configuration file according to its address, wherein the configuration file comprises a rate-limiting rule and a degradation rule.
In this step, in implementation, the gateway obtains the configuration file according to its address and loads the configuration into the gateway, updating the gateway's worker threads.
It should be noted that a configuration file may contain only the rate-limiting rule or only the degradation rule; the configuration file provided in this embodiment contains both, which is merely a specific example and does not limit what a configuration file may contain.
In addition, the gateway is located on the node where the application is located, so both the gateway and the application can use the configuration file to rate-limit and degrade requests.
Step 304, determining whether the second request satisfies the rate-limiting rule.
In this step, an example illustrates the rule: the rate-limiting rule restricts the 50th and later requests received within one day. When the second request is the 51st request of the day, it satisfies the rule and is limited; that is, the gateway does not process it and returns it to the second sending end. A rate-limiting rule typically caps the number of requests received per unit of time and limits any requests beyond the cap. The unit of time may be a day, an hour, a minute, and so on.
It should be appreciated that current limiting rules may cap the request volume for certain scenarios, such as scarce resources (flash sales, rush purchases), write services (e.g., comments, orders), and frequent complex queries (the last few pages of comments). The current limiting rule protects the gateway or the application: if the rule is satisfied, the request is refused. The limit may be applied to the total request volume, the instantaneous request volume, the average rate per unit time, the rate of remote interface calls, and so on. The throttling algorithm may be a token bucket, a leaky bucket, a counter, or the like.
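The counter algorithm mentioned above can be sketched as a fixed-window counter. This is a minimal illustration under assumed parameter names; token-bucket and leaky-bucket variants would smooth bursts rather than hard-cut them.

```python
import time

class CounterThrottle:
    """Counter-style current limiting: reject requests beyond `limit`
    per time window (a sketch, not the patent's prescribed algorithm)."""

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.window_start = time.monotonic()
        self.count = 0

    def allow(self):
        now = time.monotonic()
        if now - self.window_start >= self.window:
            # A new window has begun: reset the counter.
            self.window_start = now
            self.count = 0
        self.count += 1
        return self.count <= self.limit

# Mirror the example in the text: with a limit of 50 per day,
# the 51st request of the day is throttled.
throttle = CounterThrottle(limit=50, window_seconds=86400)
results = [throttle.allow() for _ in range(51)]
```

Here `results[50]` is `False`: the gateway would return that request to its sender instead of processing it.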
In practice, if the current limit rule is satisfied, step 305 is executed; if the current limit rule is not satisfied, step 306 is performed.
And step 305, returning the second request to the second sending end.
In this step, the second request is returned to the second sending end; that is, the gateway does not process the second request, and the second request is throttled.
Step 306, determining whether the second request satisfies the degradation rule.
In this step, take a specific example: the degradation rule degrades requests to access commodity details when the inventory service is unavailable. The second request is a request to access commodity details, so the second request satisfies the degradation rule, and an out-of-stock prompt or a prompt to wait (both being preset degradation data) is sent to the second sending end.
It should be appreciated that degradation is needed to keep core services available when the access volume surges, when a service becomes problematic (e.g., slow or unresponsive), or when non-core services would otherwise affect the performance of the core flow. The degradation rule may be configured in advance so that requests are processed according to it, or manual degradation may be employed.
In the specific implementation, if the degradation rule is satisfied, step 307 is executed; if the degradation rule is not satisfied, step 308 is performed.
Step 307, sending preset degradation data to the second sending end.
In this step, in implementation, the degradation data may be a default value (an out-of-stock prompt), fallback data (a prompt to wait), or cached data (e.g., a stock quantity of 50).
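The degradation path of steps 306-307 can be sketched as follows. The request names, the availability flag, and the fallback values are illustrative assumptions; the patent does not prescribe concrete data.

```python
def query_inventory():
    # Placeholder for the real inventory-service call (assumed value).
    return 50

def handle_request(request, inventory_available, cached_stock=None):
    """Degrade a commodity-detail request when the inventory service is down."""
    if request == "product_detail" and not inventory_available:
        # Preset degradation data: cached data if we have it,
        # otherwise a default prompt.
        if cached_stock is not None:
            return {"stock": cached_stock}                   # cached data
        return {"message": "out of stock, please wait"}      # default value
    return {"stock": query_inventory()}                      # normal path
```

When the degradation rule is satisfied, the sender receives the preset data immediately and the unavailable inventory service is never called, which is what keeps the core flow responsive.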
Step 308, querying data responding to the second request in the gateway cache according to the second request.
Step 309, if the second request is the same as the first request, transmitting the data in response to the first request to the second transmitting end.
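Steps 308-309 can be sketched as a cache keyed on the request itself, so that a second request identical to the first is answered from the gateway cache without reaching the application. The key scheme and the stand-in backend are assumptions for illustration.

```python
gateway_cache = {}

def cache_key(request):
    # Key the cache on the request itself so that an identical later
    # request (step 309) matches the entry saved for the first request.
    return (request["method"], request["path"],
            tuple(sorted(request.get("params", {}).items())))

def handle(request, backend):
    key = cache_key(request)
    if key in gateway_cache:
        # Identical request seen before: answer from the gateway cache
        # without forwarding to the application.
        return gateway_cache[key]
    response = backend(request)     # forward to the application
    gateway_cache[key] = response   # save the response to the gateway cache
    return response

calls = []

def backend(request):
    # Stand-in for the application; records how often it is reached.
    calls.append(request)
    return {"detail": "commodity 1"}

first = handle({"method": "GET", "path": "/commodity/1"}, backend)
second = handle({"method": "GET", "path": "/commodity/1"}, backend)
```

Only the first request reaches the backend; the second is served entirely from the gateway cache, which is what allows part of the traffic to be served even while the application restarts.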
In the embodiment of the invention, the configuration file is obtained by receiving its address from the application, and requests are throttled and degraded using the current limiting rule and the degradation rule in the configuration file. Therefore, even when the application is unavailable, the rules in the configuration file remain usable, so throttling and degradation stay available and the stability and availability of the service are further improved.
The method of processing a request is described above in connection with fig. 1-3 and the apparatus for processing a request is described below in connection with fig. 4-5.
To solve the problems of the prior art, an embodiment of the present invention provides an apparatus for processing a request, as shown in fig. 4, including:
the forwarding unit 401 is configured to receive a first request from a first sending end, and send the first request from the first sending end to an application.
A saving unit 402, configured to obtain data responding to the first request from the application, and save the data to a gateway cache.
And a sending unit 403, configured to obtain the data in response to the first request from the gateway cache, and send the data to the first sending end.
It should be understood that the manner of implementing the embodiment of the present invention is the same as that of implementing the embodiment shown in fig. 1, and will not be described herein.
In order to solve the problems in the prior art, another embodiment of the present invention provides an apparatus for processing a request. In an embodiment of the present invention, on the basis of the embodiment shown in fig. 4, as shown in fig. 5, the apparatus further includes:
a receiving unit 501, configured to receive a second request from a second transmitting end after transmitting the data to the first transmitting end.
And the querying unit 502 is configured to query the gateway cache for data in response to the second request according to the second request.
A processing unit 503, configured to send the data in response to the first request to the second sending end if the second request is the same as the first request;
if the second request is not the same as the first request, judging whether the application is available or not;
if yes, the second request is sent to the application, data responding to the second request is obtained from the application and is stored in the gateway cache, the data responding to the second request is obtained from the gateway cache, and the data is sent to the second sending end;
if not, the second request is sent to other applications except the application, data responding to the second request is obtained from the other applications and is stored in the gateway cache, and the data responding to the second request is obtained from the gateway cache and is sent to the second sending end.
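The availability check and failover performed by the processing unit can be sketched as follows. The `App` stand-in and the routing helper are hypothetical; the patent leaves the health-check mechanism open.

```python
class App:
    """A stand-in for an application instance deployed on some node."""

    def __init__(self, name, up=True):
        self.name = name
        self.up = up  # assumed availability flag; a real check might probe the node

    def handle(self, request):
        return f"{self.name} served {request}"

def route(request, primary, replicas):
    """Judge whether the application is available; if so, send the request
    to it, otherwise to another application deployed on a different node."""
    if primary.up:
        return primary.handle(request)
    for app in replicas:
        if app.up:
            return app.handle(request)
    raise RuntimeError("no application available for the request")

# Primary is down, so the request fails over to a replica on another node.
answer = route("GET /commodity/1", App("app-1", up=False), [App("app-2")])
```

Either way, the response is then saved to the gateway cache and returned to the second sending end, exactly as in the primary-path case.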
It should be understood that the manner of implementing the embodiment of the present invention is the same as that of implementing the embodiment shown in fig. 2, and will not be described herein.
In order to solve the problems of the prior art, a further embodiment of the present invention provides an apparatus for processing a request. In an embodiment of the present invention, on the basis of the embodiment shown in fig. 4, the apparatus further includes:
And the receiving unit is used for receiving a second request from a second sending end after the data is sent to the first sending end.
The control unit is used for receiving the address of the configuration file sent by the application before inquiring the data responding to the second request in the gateway cache according to the second request; acquiring the configuration file according to the address of the configuration file, wherein the configuration file comprises a current limiting rule and a degradation rule; judging whether the second request meets the current limiting rule, if so, returning the second request to the second sending end; if not, judging whether the second request meets the degradation rule, if so, sending preset degradation data to the second sending end, and if not, calling a query unit.
And the inquiring unit is used for inquiring the data responding to the second request in the gateway cache according to the second request.
And the processing unit is used for sending the data responding to the first request to the second sending end if the second request is the same as the first request.
It should be understood that the manner of implementing the embodiment of the present invention is the same as that of implementing the embodiment shown in fig. 3, and will not be described herein.
Fig. 6 illustrates an exemplary system architecture 600 of a method of processing a request or an apparatus of processing a request to which embodiments of the invention may be applied.
As shown in fig. 6, the system architecture 600 may include terminal devices 601, 602, 603, a network 604, and a server 605. The network 604 is used as a medium to provide communication links between the terminal devices 601, 602, 603 and the server 605. The network 604 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
A user may interact with the server 605 via the network 604 using the terminal devices 601, 602, 603 to receive or send messages, etc. Various communication client applications such as shopping class applications, web browser applications, search class applications, instant messaging tools, mailbox clients, social platform software, etc. (by way of example only) may be installed on the terminal devices 601, 602, 603.
The terminal devices 601, 602, 603 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablets, laptop and desktop computers, and the like.
The server 605 may be a server providing various services, such as a background management server (by way of example only) providing support for shopping-type websites browsed by users using the terminal devices 601, 602, 603. The background management server may analyze and process received data such as a product information query request, and feed back the processing result (e.g., the target push information or the product information, only an example) to the terminal device.
It should be noted that, the method for processing a request provided in the embodiment of the present invention is generally executed by the server 605, and accordingly, the device for processing a request is generally disposed in the server 605.
It should be understood that the number of terminal devices, networks and servers in fig. 6 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to FIG. 7, there is illustrated a schematic diagram of a computer system 700 suitable for use in implementing an embodiment of the present invention. The terminal device shown in fig. 7 is only an example, and should not impose any limitation on the functions and the scope of use of the embodiment of the present invention.
As shown in fig. 7, the computer system 700 includes a Central Processing Unit (CPU) 701, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 702 or a program loaded from a storage section 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the system 700 are also stored. The CPU 701, ROM 702, and RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
The following components are connected to the I/O interface 705: an input section 706 including a keyboard, a mouse, and the like; an output section 707 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), a speaker, and the like; a storage section 708 including a hard disk and the like; and a communication section 709 including a network interface card such as a LAN card or a modem. The communication section 709 performs communication processing via a network such as the Internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 710 as needed, so that a computer program read therefrom is installed into the storage section 708 as necessary.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication portion 709, and/or installed from the removable medium 711. The above-described functions defined in the system of the present invention are performed when the computer program is executed by a Central Processing Unit (CPU) 701.
The computer readable medium shown in the present invention may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a unit, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present invention may be implemented in software or in hardware. The described units may also be provided in a processor, for example, described as: a processor includes a forwarding unit, a saving unit, and a transmitting unit. The names of these units do not in some cases limit the unit itself, for example, the transmitting unit may also be described as "a unit that obtains the data in response to the first request from the gateway cache and transmits the data to the first transmitting end".
As another aspect, the present invention also provides a computer-readable medium that may be contained in the apparatus described in the above embodiments; or may be present alone without being fitted into the device. The computer readable medium carries one or more programs which, when executed by a device, cause the device to include: receiving a first request from a first transmitting end, and transmitting the first request from the first transmitting end to an application; acquiring data responding to the first request from the application, and storing the data to a gateway cache; and acquiring the data responding to the first request from the gateway cache, and transmitting the data to the first transmitting end.
According to the technical scheme of the embodiment of the invention, the first request from the first sending end is received and sent to the application, and the data responding to the first request is obtained from the application and saved in the gateway cache. Therefore, when the application is unavailable, some requests can still be processed with the data stored in the gateway cache, so the cache remains available while the application restarts, and the stability and availability of the service are improved.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives can occur depending upon design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (10)

1. A method of processing a request, comprising:
receiving a first request from a first transmitting end, and transmitting the first request from the first transmitting end to an application; wherein the plurality of applications are deployed on different nodes in a distributed manner;
acquiring data responding to the first request from the application, and storing the data to a gateway cache;
acquiring the data responding to the first request from the gateway cache, and sending the data to the first sending end;
after the data is sent to the first sending end, the method further comprises:
receiving a second request from a second transmitting end;
receiving an address of a configuration file sent by the application;
acquiring the configuration file according to the address of the configuration file, wherein the configuration file comprises a current limiting rule;
judging whether the second request meets the current limiting rule, if so, returning the second request to the second sending end;
after receiving the second request from the second transmitting end, the method further comprises:
if the second request is different from the first request, judging whether the application is available or not;
if the application is available, sending the second request to the application;
and if the application is not available, sending the second request to other applications except the application.
2. The method of claim 1, further comprising, after receiving the second request from the second sender:
inquiring data responding to the second request in the gateway cache according to the second request;
and if the second request is the same as the first request, transmitting the data responding to the first request to the second transmitting end.
3. The method of claim 1, wherein after the sending the second request to the application, further comprising:
acquiring data responding to the second request from the application, storing the data in the gateway cache, acquiring the data responding to the second request from the gateway cache, and transmitting the data to the second transmitting terminal;
after the second request is sent to the other applications except the application, the method further comprises:
and acquiring data responding to the second request from the other applications, storing the data in the gateway cache, acquiring the data responding to the second request from the gateway cache, and transmitting the data to the second transmitting terminal.
4. The method of claim 1, further comprising, after obtaining the profile according to the address of the profile, if the profile further includes a demotion rule:
and judging whether the second request meets the degradation rule or not, if so, sending preset degradation data to the second sending end.
5. An apparatus for processing a request, comprising:
the forwarding unit is used for receiving a first request from a first sending end and sending the first request from the first sending end to an application; wherein the plurality of applications are deployed on different nodes in a distributed manner;
a storage unit, configured to obtain data responding to the first request from the application, and store the data in a gateway cache;
a sending unit, configured to obtain, from the gateway cache, the data in response to the first request, and send the data to the first sending end;
a receiving unit, configured to receive a second request from a second transmitting end;
the control unit is used for receiving the address of the configuration file sent by the application; acquiring the configuration file according to the address of the configuration file, wherein the configuration file comprises a current limiting rule; judging whether the second request meets the current limiting rule, if so, returning the second request to the second sending end;
the processing unit is used for judging whether the application is available or not if the second request is different from the first request; if the application is available, sending the second request to the application; and if the application is not available, sending the second request to other applications except the application.
6. The apparatus as recited in claim 5, further comprising:
a query unit, configured to query the gateway cache for data in response to the second request according to the second request;
the processing unit is further configured to send the data in response to the first request to the second sending end if the second request is the same as the first request.
7. The apparatus of claim 6, wherein the processing unit is further specifically configured to:
if the application is available, acquiring data responding to the second request from the application, storing the data in the gateway cache, acquiring the data responding to the second request from the gateway cache, and transmitting the data to the second transmitting end;
and if the application is unavailable, acquiring data responding to the second request from the other applications, storing the data in the gateway cache, acquiring the data responding to the second request from the gateway cache, and transmitting the data to the second transmitting end.
8. The device according to claim 5, wherein the control unit is further specifically configured to:
if the configuration file further comprises a degradation rule, after the configuration file is obtained according to the address of the configuration file, judging whether the second request meets the degradation rule, and if yes, sending preset degradation data to the second sending end.
9. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs which,
when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-4.
10. A computer readable medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the method according to any of claims 1-4.
CN201910197384.3A 2019-03-15 2019-03-15 Method and device for processing request Active CN111698273B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910197384.3A CN111698273B (en) 2019-03-15 2019-03-15 Method and device for processing request


Publications (2)

Publication Number Publication Date
CN111698273A CN111698273A (en) 2020-09-22
CN111698273B true CN111698273B (en) 2024-04-09

Family

ID=72475900

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910197384.3A Active CN111698273B (en) 2019-03-15 2019-03-15 Method and device for processing request

Country Status (1)

Country Link
CN (1) CN111698273B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2833272A1 (en) * 2013-07-29 2015-02-04 Amadeus S.A.S. Processing information queries in a distributed information processing environment
US10061852B1 (en) * 2015-05-19 2018-08-28 Amazon Technologies, Inc. Transparent proxy tunnel caching for database access

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130219006A1 (en) * 2012-02-21 2013-08-22 Sony Corporation Multiple media devices through a gateway server or services to access cloud computing service storage
CN107222426B (en) * 2016-03-21 2021-07-20 阿里巴巴集团控股有限公司 Flow control method, device and system




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant