CN111698273A - Method and device for processing request - Google Patents

Method and device for processing request

Publication number
CN111698273A
CN111698273A
Authority
CN
China
Prior art keywords
request
data
application
sending
sending end
Prior art date
Legal status
Granted
Application number
CN201910197384.3A
Other languages
Chinese (zh)
Other versions
CN111698273B (en)
Inventor
张开涛
王杰颖
邹子靖
林本兴
田子玉
纪鸿焘
Current Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Jingdong Shangke Information Technology Co Ltd
Priority to CN201910197384.3A
Publication of CN111698273A
Application granted
Publication of CN111698273B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/50 Network services
    • H04L67/56 Provisioning of proxy services
    • H04L67/568 Storing data temporarily at an intermediate stage, e.g. caching
    • H04L67/2866 Architectures; Arrangements
    • H04L67/30 Profiles
    • H04L67/566 Grouping or aggregating service requests, e.g. for unified processing
    • H04L67/60 Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • H04L67/63 Routing a service request depending on the request content or context

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention discloses a method and a device for processing a request, and relates to the technical field of computers. In one embodiment, the method includes: receiving a first request from a first sending end, and sending the first request to an application; acquiring data responding to the first request from the application, and storing the data in a gateway cache; and acquiring the data responding to the first request from the gateway cache, and sending the data to the first sending end. This embodiment ensures that part of the requests can still be processed when the application is unavailable, and improves the stability and availability of the service.

Description

Method and device for processing request
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method and an apparatus for processing a request.
Background
At present, during the restart of an application, the application is unavailable, its cache is unavailable, and requests cannot be processed at all. In the prior art, the cached data of the application is saved into a database such as Redis. After the application restarts, the now-available application processes requests by looking the data up in Redis; this technique addresses the high application load, and the impact on the database, that occur after a restart.
In the process of implementing the invention, the inventor finds that at least the following problems exist in the prior art:
However, during the restart itself the application is unavailable. The above technique only guarantees that the data still exists; an unavailable application cannot look the data up in Redis. It therefore cannot solve the problems that, while the application is restarting, the application's cache is unavailable and requests cannot be processed at all.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method and an apparatus for processing a request, which can ensure that a part of requests can be processed when an application is unavailable, and improve stability and availability of a service.
To achieve the above object, according to an aspect of an embodiment of the present invention, there is provided a method of processing a request.
The method for processing the request of the embodiment of the invention comprises the following steps:
receiving a first request from a first sending end, and sending the first request from the first sending end to an application;
acquiring data responding to the first request from the application, and storing the data to a gateway cache;
and acquiring the data responding to the first request from the gateway cache, and sending the data to the first sending end.
In one embodiment, after sending the data to the first sender, the method further includes:
receiving a second request from a second transmitting end;
inquiring data responding to the second request in the gateway cache according to the second request;
and if the second request is the same as the first request, sending the data responding to the first request to the second sending end.
In one embodiment, after querying the gateway cache for data responding to the second request according to the second request, the method further includes:
if the second request is different from the first request, judging whether the application is available;
if so, sending the second request to the application, acquiring data responding to the second request from the application, storing the data in the gateway cache, acquiring the data responding to the second request from the gateway cache, and sending the data to the second sending end;
if not, sending the second request to other applications except the application, acquiring data responding to the second request from the other applications, storing the data in the gateway cache, acquiring the data responding to the second request from the gateway cache, and sending the data to the second sending end.
In one embodiment, before querying the gateway cache for data responding to the second request according to the second request, the method further includes:
receiving an address of a configuration file sent by the application;
acquiring the configuration file according to the address of the configuration file, wherein the configuration file comprises a current limiting rule;
and judging whether the second request meets the current limiting rule, if so, returning the second request to the second sending end.
In one embodiment, if the configuration file further includes a downgrading rule, after obtaining the configuration file according to the address of the configuration file, the method further includes:
and judging whether the second request meets the degradation rule, if so, sending preset degradation data to the second sending end.
To achieve the above object, according to another aspect of the embodiments of the present invention, there is provided an apparatus for processing a request.
The device for processing the request of the embodiment of the invention comprises:
the forwarding unit is used for receiving a first request from a first sending end and sending the first request from the first sending end to an application;
the storage unit is used for acquiring data responding to the first request from the application and storing the data to a gateway cache;
and the sending unit is used for acquiring the data responding to the first request from the gateway cache and sending the data to the first sending end.
In one embodiment, further comprising:
a receiving unit, configured to receive a second request from a second transmitting end after transmitting the data to the first transmitting end;
the query unit is used for querying data responding to the second request in the gateway cache according to the second request;
and the processing unit is configured to send the data responding to the first request to the second sending end if the second request is the same as the first request.
In one embodiment, the processing unit is specifically further configured to:
if the second request is different from the first request, judging whether the application is available;
if so, sending the second request to the application, acquiring data responding to the second request from the application, storing the data in the gateway cache, acquiring the data responding to the second request from the gateway cache, and sending the data to the second sending end;
if not, sending the second request to other applications except the application, acquiring data responding to the second request from the other applications, storing the data in the gateway cache, acquiring the data responding to the second request from the gateway cache, and sending the data to the second sending end.
In one embodiment, further comprising:
the control unit is used for receiving the address of the configuration file sent by the application before inquiring the data responding to the second request in the gateway cache according to the second request; acquiring the configuration file according to the address of the configuration file, wherein the configuration file comprises a current limiting rule; and judging whether the second request meets the current limiting rule, if so, returning the second request to the second sending end.
In one embodiment, the control unit is specifically further configured to:
if the configuration file further comprises a degradation rule, after the configuration file is obtained according to the address of the configuration file, whether the second request meets the degradation rule is judged, and if yes, preset degradation data is sent to the second sending end.
To achieve the above object, according to still another aspect of an embodiment of the present invention, there is provided an electronic apparatus.
An electronic device of an embodiment of the present invention includes: one or more processors; the storage device is used for storing one or more programs, and when the one or more programs are executed by the one or more processors, the one or more processors are enabled to realize the method for processing the request provided by the embodiment of the invention.
To achieve the above object, according to still another aspect of an embodiment of the present invention, there is provided a computer-readable medium.
A computer-readable medium of an embodiment of the present invention stores thereon a computer program, which when executed by a processor implements the method for processing a request provided by an embodiment of the present invention.
One embodiment of the above invention has the following advantages or benefits: the first request is sent to the application by receiving the first request from the first sending end, and the data responding to the first request is obtained from the application and stored in the gateway cache. Therefore, when the application is unavailable, the data stored in the gateway cache can be used for processing part of the requests, the availability of the cache in the restarting process of the application is realized, the processing of part of the requests when the application is unavailable is ensured, and the stability and the availability of the service are improved.
Further effects of the above-mentioned non-conventional alternatives will be described below in connection with the embodiments.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting the invention. Wherein:
FIG. 1 is a schematic diagram of a main flow of a method of processing a request according to an embodiment of the invention;
FIG. 2 is an interaction diagram of a method of processing a request according to another embodiment of the invention;
FIG. 3 is a schematic diagram of a main flow of a method of processing a request according to yet another embodiment of the invention;
FIG. 4 is a schematic diagram of the main modules of an apparatus for processing requests according to one embodiment of the present invention;
FIG. 5 is a schematic diagram of an apparatus for processing a request according to another embodiment of the present invention;
FIG. 6 is an exemplary system architecture diagram in which embodiments of the present invention may be employed;
fig. 7 is a schematic block diagram of a computer system suitable for use in implementing a terminal device or server of an embodiment of the invention.
Detailed Description
Exemplary embodiments of the present invention are described below with reference to the accompanying drawings, in which various details of embodiments of the invention are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
In actual development, caches are used to handle large numbers of requests and ensure high service availability, and multiple levels of cache are generally used to improve service performance, such as a local cache, a distributed cache, an access-layer cache, and a browser cache. A local cache in a Java application is typically implemented with, for example, LinkedHashMap (a hash-table and linked-list implementation of the Map interface, a subclass of HashMap with a predictable iteration order that records insertion order) or Ehcache (a pure-Java in-process cache framework noted for speed and simplicity). A distributed cache is typically implemented with Memcached (a high-performance distributed memory-object caching system for dynamic Web applications that reduces database load by caching data and objects in memory, thereby speeding up dynamic, database-driven websites) or Redis (an open-source, network-capable, memory-based, log-persistent key-value database written in ANSI C that provides APIs (application programming interfaces) in multiple languages). The access-layer cache may be implemented at the level of Nginx (a high-performance HTTP and reverse proxy server, and also an IMAP/POP3/SMTP server).
At present, after receiving a request sent by a sending end, an application implemented in a language such as Java, Go, or Python first queries its own cache; when the data is not found there, the application accesses the database to obtain it. The application then performs computation on the data and returns the result to the sending end. Because this computation consumes the application's CPU resources, the application stores the computed data in its cache, so that when the same request arrives again it only needs to send the stored result, saving the database access and the computation. However, when the application is unavailable, neither its cache, nor downgrading, nor throttling is available, and requests cannot be handled at all.
After the application restarts, the data in the application cache is gone and the cache must be rebuilt. At this moment the application receives a large number of requests from the sending end but cannot obtain data from its cache, so it must access the database frequently; since the database's processing capacity is comparatively weak, the application's load becomes very high, the database is impacted, and the service may become unavailable. One prior-art technique stores the application's cached data in a database such as Redis. After the restart, the now-available application processes requests by looking the data up in Redis, which solves the post-restart problem. However, during the restart itself the application is unavailable; the technique only guarantees that the data exists, and an unavailable application cannot look it up in Redis, so it cannot solve the problems that the cache is unavailable and requests cannot be processed at all while the application is restarting.
In order to solve the problems in the prior art, an embodiment of the present invention provides a method for processing a request, as shown in fig. 1, the method includes:
step S101, receiving a first request from a first sending end, and sending the first request from the first sending end to an application.
In this step, the destination address of the load balancer can be modified to receive the first request from the first transmitting end. The first request from the first sender may be sent to the application according to an address of the application. At this point, the application is available, and the application processes the first request.
Step S102, obtaining data responding to the first request from the application, and storing the data to a gateway cache.
In this step, the application receives the first request, processes the first request, and obtains data responding to the first request from a database storing data responding to the first request. Thus, data responsive to the first request may be obtained from the application. A cache policy may be preset, and the data responding to the first request may be stored in the gateway cache according to the cache policy. Since the data responding to the first request is stored in the gateway cache, when the application is unavailable, if a second request which is the same as the first request and is sent by the second sending end is received, the data which is obtained from the gateway cache and responds to the first request can be directly sent to the second sending end. This is so because the second request is the same as the first request, and the data responsive to the second request is the data responsive to the first request. Therefore, the problem that the application request cannot be processed at all when the application is unavailable is solved, partial requests can be processed when the application is unavailable by utilizing the gateway cache, and the stability and the usability of the service are improved.
Step S103, obtaining the data responding to the first request from the gateway cache, and sending the data to the first sending end.
In this embodiment, it should be noted that after the application is restarted, part of the requests may be processed by using the gateway cache, and thus, the embodiment of the present invention may also reduce the impact on the database. In the application restarting process, the conversion that the application is completely unavailable and part of the application is available is realized, and the experience degree of a user at the sending end is improved.
In the embodiment of the invention, the first request is sent to the application by receiving the first request from the first sending end, and the data responding to the first request is obtained from the application and is stored in the gateway cache. Therefore, when the application is unavailable, the data stored in the gateway cache can be used for processing part of the requests, the availability of the cache in the restarting process of the application is realized, the processing of part of the requests when the application is unavailable is ensured, and the stability and the availability of the service are improved.
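The main flow of steps S101 to S103 can be sketched as follows. This is a minimal illustration, not the patented implementation: all names are hypothetical, and the application is modeled as a plain callable.

```python
class Gateway:
    """Sketch of steps S101-S103: forward, cache, serve from cache."""

    def __init__(self, application):
        self.application = application  # callable: request -> response data
        self.cache = {}                 # gateway cache, keyed by request

    def handle(self, request):
        # S101: receive the first request and forward it to the application
        data = self.application(request)
        # S102: store the data responding to the request in the gateway cache
        self.cache[request] = data
        # S103: obtain the data from the gateway cache and send it back
        return self.cache[request]
```

For example, `Gateway(lambda req: "data:" + req).handle("item/1")` returns `"data:item/1"` and leaves a copy in `cache`, so the same request can later be answered even if the application goes down.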
To solve the problems in the prior art, another embodiment of the present invention provides a method for processing a request. As shown in fig. 2, the method includes:
step S201, receiving a first request from a first sending end, and sending the first request from the first sending end to an application.
In this step, the sending end may be a user end or a server end. The method provided by the embodiment of the invention can be applied to a gateway built with OpenResty (OpenResty is a high-performance Web platform based on Nginx and Lua; it integrates a large number of refined Lua libraries, third-party modules, and most of their dependencies, and is used to conveniently build dynamic Web applications, Web services, and dynamic gateways capable of handling ultra-high concurrency with great extensibility). The gateway is arranged at the node where the application is located.
In a specific implementation, the destination address in the load balancer may be set to the address of the gateway; after the first request sent by the first sending end reaches the load balancer, the load balancer forwards it to the gateway according to that destination address. Thus, the gateway may receive the first request from the first sending end. The load balancer may be a DNS (Domain Name System) load balancer or an Nginx (a high-performance HTTP and reverse proxy server, and also an IMAP/POP3/SMTP server) load balancer.
Step S202, obtaining data responding to the first request from the application, and storing the data to a gateway cache.
Step S203, obtaining the data responding to the first request from the gateway cache, and sending the data to the first sending end.
Step S204, a second request from a second sending end is received.
In this step, it should be noted that the first sending end and the second sending end may be the same or different; whether they are the same sending end does not affect the implementation of the embodiment of the present invention. Likewise, the first request and the second request may or may not be the same. If the second request is the same as the first request, the data responding to the second request is the data responding to the first request, so the data responding to the first request is sent to the second sending end. If the second request is different from the first request, whether the application is available is judged: if it is available, the second request is sent to the application; if it is not, the second request is sent to an application other than that application. In a specific implementation, the first request or the second request may be, for example, a request to load an item's detail page or a request to add an item to a shopping cart.
Step S205, querying data responding to the second request in the gateway cache according to the second request.
Step S206, if the second request is the same as the first request, sending the data responding to the first request to the second sending end.
Step S207, if the second request is different from the first request, determining whether the application is available.
In this step, the application may be unavailable because it is restarting, because it is down, or because of an exception, among other cases. If the application is available, step S208 is performed; if it is not, step S209 is performed. In a specific implementation, whether the application is available may be determined by sending a connection request to the application: if the connection succeeds, the application is available; otherwise it is unavailable.
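The availability check just described (send a connection request; if it connects, the application is available) can be sketched as a simple TCP probe. This is an illustrative sketch only; the host, port, and timeout parameters are assumptions, not part of the embodiment.

```python
import socket

def application_available(host, port, timeout=1.0):
    # Try to open a TCP connection to the application; if the
    # connection succeeds, the application is considered available.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```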
Step S208, sending the second request to the application, acquiring data responding to the second request from the application, and storing the data in the gateway cache, and acquiring the data responding to the second request from the gateway cache, and sending the data to the second sending end.
In this step, the data may be saved to the gateway cache according to a preset cache policy, which is set by the programmer using the shared dictionary. It should be noted that the application is available, the application can process the second request, and the application obtains the data responding to the second request from the database.
It should be noted that the application may be a WEB application (a WEB application is an application that can be accessed through the WEB, and the greatest advantage of the application is that a user can easily access the application, and the user only needs to have a browser and does not need to install other software).
Step S209, sending the second request to other applications except the application, acquiring data responding to the second request from the other applications, and storing the data in the gateway cache, and acquiring the data responding to the second request from the gateway cache and sending the data to the second sending end.
In this step, it should be noted that, during deployment, a plurality of applications may be deployed on different nodes in a distributed manner, and when an application is unavailable, because nodes in the distributed deployment manner do not affect each other, other applications except the application are available, and the other applications can process the second request, and acquire data responding to the second request from the database.
In the embodiment of the present invention, a second request is received from a second sending end, data responding to the second request is queried in the gateway cache according to the second request, and when the second request is the same as the first request, the data responding to the first request is sent to the second sending end. The work of receiving requests and querying the cache is thus moved out of the application, so that when the application is unavailable, part of the requests can be processed using the gateway cache; "no requests can be processed at all" becomes "part of the requests can be processed", further improving the stability and availability of the service. Fault tolerance is achieved by judging whether the application is available: when it is available, the application processes the request; when it is not, an application other than that application processes it. Whether or not the application is available, after the request is processed the data responding to it is stored in the gateway cache, so the gateway cache can be used to process part of the requests when the application is unavailable, realizing cache availability during the application's restart and further improving the stability and availability of the service.
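Putting steps S204 to S209 together, the gateway's handling of a second request can be sketched as below. The failover choice of "another application" is simplified here to the first other instance, which is an assumption of this sketch, as are all the names.

```python
def handle_second_request(request, gateway_cache, application,
                          other_applications, is_available):
    # S205/S206: if the same request was seen before, serve it from
    # the gateway cache without touching any application.
    if request in gateway_cache:
        return gateway_cache[request]
    # S207: otherwise judge whether the application itself is available.
    target = application if is_available(application) else other_applications[0]
    # S208/S209: obtain the data, store it in the gateway cache, return it.
    data = target(request)
    gateway_cache[request] = data
    return data
```

Note that the cache is written on both the available and the unavailable path, matching the observation above that the gateway cache is kept warm either way.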
To solve the problems in the prior art, a further embodiment of the present invention provides a method for processing a request. In the embodiment of the present invention, on the basis of the embodiment shown in fig. 1, as shown in fig. 3, the method further includes, after step 103:
step 301 receives a second request from a second sender.
Step 302, receiving an address of a configuration file sent by the application.
In this step, in a specific implementation, the configuration file is stored on the node where the application is located and may be set up with Nginx according to service requirements. The application sends the address of the configuration file to the gateway, so the gateway can receive that address and obtain the configuration file. If the application is unavailable, this does not affect the configuration file: the configuration file remains available, and the gateway uses it to throttle and downgrade requests.
Step 303, obtaining the configuration file according to the address of the configuration file, where the configuration file includes a current limiting rule and a degrading rule.
In this step, during implementation, the gateway obtains the configuration file according to the address of the configuration file, and loads the configuration of the configuration file to the gateway, thereby updating the thread of the gateway.
It should be noted that the configuration file may only include the current limiting rule, or only include the downgrading rule, and the configuration file provided in the embodiment of the present invention includes the current limiting rule and the downgrading rule, but this is only a specific example, and the content included in the configuration file is not limited in this example.
In addition, the gateway is located at the node where the application is located, and thus, both the gateway and the application can use the configuration file to throttle and downgrade the request.
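As a sketch of this configuration mechanism, the gateway might fetch the file from the address it received and read out the two rule sets. The JSON layout and key names below are assumptions for illustration only; the embodiment does not prescribe a file format.

```python
import json

def load_gateway_config(config_path):
    # Obtain the configuration file from the address the application
    # sent to the gateway, then extract the current limiting rule and
    # the downgrading rule (key names are illustrative assumptions).
    with open(config_path) as f:
        config = json.load(f)
    return config.get("limit_rule"), config.get("degrade_rule")
```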
And step 304, judging whether the second request meets the current limiting rule.
In this step, consider the following example: the current limiting rule limits the 50th and later requests received in one day. When the second request is the 51st request of the day, it satisfies the current limiting rule and is therefore limited; that is, the gateway does not process the second request and returns it to the second sending end. A current limiting rule generally controls the number of requests received per unit time, limiting those beyond the quota. The unit time may be a day, an hour, a minute, and so on.
It should be understood that the current limiting rules may limit the volume of requests in certain scenarios, such as scarce resources (flash sales, rush purchases), write services (e.g., posting comments, placing orders), or frequent complex queries (the last few pages of comments). The gateway or application is protected by the current limiting rules: if a rule is satisfied, the request is refused. The limiting means may be limiting the total number of requests, limiting the number of instantaneous requests, limiting the average rate per unit time, or limiting the remote interface call rate, among others. Current limiting algorithms include the token bucket, the leaky bucket, and counters.
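As a sketch of one of the algorithms named above, a minimal token bucket is shown below. The rate and capacity values are chosen purely for illustration.

```python
import time

class TokenBucket:
    """Token-bucket current limiting: tokens refill at a fixed rate up
    to a capacity; each request consumes one token, so short bursts up
    to the capacity are allowed while the average rate stays bounded."""

    def __init__(self, rate_per_sec, capacity):
        self.rate = rate_per_sec
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        elapsed = now - self.last
        self.last = now
        # Refill in proportion to elapsed time, capped at the capacity.
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # the request should be limited

bucket = TokenBucket(rate_per_sec=1, capacity=5)
burst = [bucket.allow() for _ in range(8)]  # 5 pass, the rest are limited
```

A leaky bucket smooths output at a constant rate instead, and a counter (as in the previous sketch) simply caps the total per window; the token bucket is the usual choice when brief bursts should be tolerated.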
In specific implementation, if the current limiting rule is satisfied, step 305 is executed; if the current limit rule is not satisfied, go to step 306.
Step 305, returning the second request to the second sending end.
In this step, returning the second request to the second sending end means that the gateway does not process the second request; the second request is thereby limited.
Step 306, judging whether the second request meets the degradation rule.
In this step, take the following as a specific example: the downgrading rule is to downgrade requests for goods details when the inventory service is unavailable. The second request is a request for goods details, so the second request satisfies the downgrading rule, and a prompt that the goods are in stock, or a prompt to wait, is sent to the second sending end (the in-stock prompt or the waiting prompt is preset degradation data).
It should be appreciated that when the access volume increases dramatically, when a service has a problem (e.g., slow response time or no response), or when a non-core service affects the performance of the core flow, downgrading is required to ensure that the core service remains available. The downgrading rules may be configured in advance so that requests are processed according to them, or manual downgrading may be employed.
In specific implementation, if the degradation rule is satisfied, go to step 307; if the downgrade rule is not satisfied, then step 308 is performed.
Step 307, sending the preset degradation data to the second sending end.
In this step, in a specific implementation, the degradation data may be a default value (e.g., an in-stock prompt), fallback data (e.g., a prompt to wait), or cached data (e.g., a stock quantity of 50).
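The three kinds of degradation data can be sketched as below. The function and prompt strings (`handle_detail_request`, `"item is in stock"`) are hypothetical names introduced only for this illustration.

```python
# `inventory_available` and the prompt strings are illustrative
# stand-ins; the document does not name them.
def handle_detail_request(request, inventory_available, cache):
    """If the inventory service is unavailable, answer a goods-detail
    request with preset degradation data: cached data when present,
    otherwise a default prompt. Returns None when no downgrade applies."""
    if inventory_available:
        return None                    # normal processing continues
    cached = cache.get(request)
    if cached is not None:
        return cached                  # cached data, e.g. a stock count
    return "item is in stock"          # default / fallback prompt

cache = {"sku-1": "stock: 50"}
hit = handle_detail_request("sku-1", False, cache)   # cached data
miss = handle_detail_request("sku-2", False, cache)  # default prompt
```

The preference order shown (cached data before a static default) is one reasonable choice; the embodiment leaves the choice of degradation data to configuration.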
Step 308, querying data responding to the second request in the gateway cache according to the second request.
Step 309, if the second request is the same as the first request, sending the data responding to the first request to the second sending end.
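The cache flow of steps 308–309 (and of the earlier first-request path) can be sketched as a small class. This is a minimal illustration assuming an in-memory dictionary cache; the names `Gateway` and `handle` are introduced here, not taken from the document.

```python
class Gateway:
    """Sketch of the cache flow: the response obtained from the
    application for a first request is saved in a gateway cache, and an
    identical second request is answered from the cache without calling
    the application again."""

    def __init__(self, application):
        self.application = application   # callable: request -> data
        self.cache = {}

    def handle(self, request):
        if request in self.cache:
            return self.cache[request]   # second request same as first
        data = self.application(request) # forward to the application
        self.cache[request] = data       # save to the gateway cache
        return data

calls = []
def app(request):
    calls.append(request)                # record application invocations
    return "data for " + request

gw = Gateway(app)
first = gw.handle("GET /item/1")   # forwarded to the application
second = gw.handle("GET /item/1")  # served from the gateway cache
```

Because the second, identical request never reaches the application, the cached data remains usable even while the application is restarting, which is the availability point the embodiment emphasizes.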
In the embodiment of the present invention, the configuration file is obtained by receiving the address of the configuration file sent by the application, and requests are limited and degraded by using the current limiting rule and the degrading rule in the configuration file. Therefore, when the application is unavailable, the availability of current limiting and degrading can still be ensured by using the rules in the configuration file, further improving the stability and availability of the service.
The method of processing a request is described above in connection with fig. 1-3, and the apparatus for processing a request is described below in connection with fig. 4-5.
In order to solve the problems in the prior art, an embodiment of the present invention provides an apparatus for processing a request, as shown in fig. 4, the apparatus including:
the forwarding unit 401 is configured to receive a first request from a first sending end, and send the first request from the first sending end to an application.
A saving unit 402, configured to obtain data responding to the first request from the application, and save the data to a gateway cache.
A sending unit 403, configured to obtain the data responding to the first request from the gateway cache, and send the data to the first sending end.
It should be understood that the manner of implementing the embodiment of the present invention is the same as that of implementing the embodiment shown in fig. 1, and thus, the detailed description thereof is omitted.
In order to solve the problems in the prior art, another embodiment of the present invention provides an apparatus for processing a request. In the embodiment of the present invention, on the basis of the embodiment shown in fig. 4, as shown in fig. 5, the apparatus further includes:
a receiving unit 501, configured to receive a second request from a second sending end after sending the data to the first sending end.
A querying unit 502, configured to query, according to the second request, data in response to the second request in the gateway cache.
A processing unit 503, configured to send the data responding to the first request to the second sending end if the second request is the same as the first request;
if the second request is different from the first request, judging whether the application is available;
if so, sending the second request to the application, acquiring data responding to the second request from the application, storing the data in the gateway cache, acquiring the data responding to the second request from the gateway cache, and sending the data to the second sending end;
if not, sending the second request to other applications except the application, acquiring data responding to the second request from the other applications, storing the data in the gateway cache, acquiring the data responding to the second request from the gateway cache, and sending the data to the second sending end.
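The availability check and failover just described can be sketched as follows. The helper names (`handle_with_failover`, `is_available`) are hypothetical stand-ins for the processing unit's logic.

```python
# `apps` and `is_available` are hypothetical stand-ins for the
# application and the availability check described above.
def handle_with_failover(request, apps, is_available, cache):
    """Forward the request to the first available application (the
    primary first, then the other applications), save the response to
    the gateway cache, and answer from the cache."""
    for app in apps:
        if is_available(app):
            cache[request] = app(request)   # save to the gateway cache
            return cache[request]           # send from the gateway cache
    raise RuntimeError("no application is available")

cache = {}
primary = lambda r: "from primary"
replica = lambda r: "from replica"
down = {primary}                      # simulate the primary being unavailable
result = handle_with_failover("req", [primary, replica],
                              lambda a: a not in down, cache)
```

Either way the response is written to the gateway cache first and then sent from it, so subsequent identical requests can be answered without touching any application.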
It should be understood that the manner of implementing the embodiment of the present invention is the same as the manner of implementing the embodiment shown in fig. 2, and the description thereof is omitted.
To solve the problems in the prior art, another embodiment of the present invention provides an apparatus for processing a request. In this embodiment of the present invention, on the basis of the embodiment shown in fig. 4, the apparatus further includes:
a receiving unit, configured to receive a second request from a second sending end after sending the data to the first sending end.
The control unit is used for receiving the address of the configuration file sent by the application before inquiring the data responding to the second request in the gateway cache according to the second request; acquiring the configuration file according to the address of the configuration file, wherein the configuration file comprises a current limiting rule and a degrading rule; judging whether the second request meets the current limiting rule, if so, returning the second request to the second sending end; if not, judging whether the second request meets the degradation rule, if so, sending preset degradation data to the second sending end, and otherwise, calling a query unit.
And the query unit is used for querying data responding to the second request in the gateway cache according to the second request.
And the processing unit is configured to send the data responding to the first request to the second sending end if the second request is the same as the first request.
It should be understood that the manner of implementing the embodiment of the present invention is the same as the manner of implementing the embodiment shown in fig. 3, and the description thereof is omitted.
Fig. 6 illustrates an exemplary system architecture 600 to which the method of processing a request or the apparatus of processing a request of an embodiment of the present invention may be applied.
As shown in fig. 6, the system architecture 600 may include terminal devices 601, 602, 603, a network 604, and a server 605. The network 604 serves to provide a medium for communication links between the terminal devices 601, 602, 603 and the server 605. The network 604 may include various types of connections, such as wired or wireless communication links, or fiber-optic cables, among others.
A user may use the terminal devices 601, 602, 603 to interact with the server 605 via the network 604 to receive or send messages or the like. The terminal devices 601, 602, 603 may have installed thereon various communication client applications, such as shopping applications, web browser applications, search applications, instant messaging tools, mailbox clients, social platform software, etc. (by way of example only).
The terminal devices 601, 602, 603 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 605 may be a server providing various services, such as a background management server (for example only) providing support for shopping websites browsed by users using the terminal devices 601, 602, 603. The background management server may analyze and otherwise process received data such as a product information query request, and feed back a processing result (for example, target push information or product information, merely an example) to the terminal device.
It should be noted that the method for processing the request provided by the embodiment of the present invention is generally executed by the server 605, and accordingly, the apparatus for processing the request is generally disposed in the server 605.
It should be understood that the number of terminal devices, networks, and servers in fig. 6 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to FIG. 7, shown is a block diagram of a computer system 700 suitable for implementing a terminal device according to an embodiment of the present invention. The terminal device shown in fig. 7 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present invention.
As shown in fig. 7, the computer system 700 includes a Central Processing Unit (CPU)701, which can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)702 or a program loaded from a storage section 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data necessary for the operation of the system 700 are also stored. The CPU 701, the ROM 702, and the RAM 703 are connected to each other via a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
The following components are connected to the I/O interface 705: an input portion 706 including a keyboard, a mouse, and the like; an output section 707 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section 708 including a hard disk and the like; and a communication section 709 including a network interface card such as a LAN card, a modem, or the like. The communication section 709 performs communication processing via a network such as the internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 710 as necessary, so that a computer program read out therefrom is mounted into the storage section 708 as necessary.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 709, and/or installed from the removable medium 711. The computer program performs the above-described functions defined in the system of the present invention when executed by the Central Processing Unit (CPU) 701.
It should be noted that the computer readable medium shown in the present invention can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a unit, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or by combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present invention may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor includes a forwarding unit, a saving unit, and a transmitting unit. The names of these units do not form a limitation to the unit itself in some cases, for example, a sending unit may also be described as a "unit that obtains the data responding to the first request from the gateway cache and sends the data to the first sending end".
As another aspect, the present invention also provides a computer-readable medium that may be contained in the apparatus described in the above embodiments; or may be separate and not incorporated into the device. The computer readable medium carries one or more programs which, when executed by a device, cause the device to comprise: receiving a first request from a first sending end, and sending the first request from the first sending end to an application; acquiring data responding to the first request from the application, and storing the data to a gateway cache; and acquiring the data responding to the first request from the gateway cache, and sending the data to the first sending end.
According to the technical scheme of the embodiment of the invention, the first request from the first sending end is received, the first request is sent to the application, the data responding to the first request is obtained from the application, and the data is stored in the gateway cache. Therefore, when the application is unavailable, the data stored in the gateway cache can be used for processing part of the requests, the availability of the cache in the restarting process of the application is realized, the processing of part of the requests when the application is unavailable is ensured, and the stability and the availability of the service are improved.
The above-described embodiments should not be construed as limiting the scope of the invention. Those skilled in the art will appreciate that various modifications, combinations, sub-combinations, and substitutions can occur, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (12)

1. A method of processing a request, comprising:
receiving a first request from a first sending end, and sending the first request from the first sending end to an application;
acquiring data responding to the first request from the application, and storing the data to a gateway cache;
and acquiring the data responding to the first request from the gateway cache, and sending the data to the first sending end.
2. The method of claim 1, further comprising, after transmitting the data to the first transmitting end:
receiving a second request from a second transmitting end;
inquiring data responding to the second request in the gateway cache according to the second request;
and if the second request is the same as the first request, sending the data responding to the first request to the second sending end.
3. The method of claim 2, further comprising, after querying the gateway cache for data responsive to the second request in accordance with the second request:
if the second request is different from the first request, judging whether the application is available;
if so, sending the second request to the application, acquiring data responding to the second request from the application, storing the data in the gateway cache, acquiring the data responding to the second request from the gateway cache, and sending the data to the second sending end;
if not, sending the second request to other applications except the application, acquiring data responding to the second request from the other applications, storing the data in the gateway cache, acquiring the data responding to the second request from the gateway cache, and sending the data to the second sending end.
4. The method of claim 2, further comprising, prior to querying the gateway cache for data responsive to the second request in accordance with the second request:
receiving an address of a configuration file sent by the application;
acquiring the configuration file according to the address of the configuration file, wherein the configuration file comprises a current limiting rule;
and judging whether the second request meets the current limiting rule, if so, returning the second request to the second sending end.
5. The method of claim 4, wherein if the configuration file further comprises a downgrading rule, after obtaining the configuration file according to the address of the configuration file, further comprising:
and judging whether the second request meets the degradation rule, if so, sending preset degradation data to the second sending end.
6. An apparatus for processing a request, comprising:
the forwarding unit is used for receiving a first request from a first sending end and sending the first request from the first sending end to an application;
the storage unit is used for acquiring data responding to the first request from the application and storing the data to a gateway cache;
and the sending unit is used for acquiring the data responding to the first request from the gateway cache and sending the data to the first sending end.
7. The apparatus of claim 6, further comprising:
a receiving unit, configured to receive a second request from a second transmitting end after transmitting the data to the first transmitting end;
the query unit is used for querying data responding to the second request in the gateway cache according to the second request;
and the processing unit is configured to send the data responding to the first request to the second sending end if the second request is the same as the first request.
8. The apparatus according to claim 7, wherein the processing unit is further specifically configured to:
if the second request is different from the first request, judging whether the application is available;
if so, sending the second request to the application, acquiring data responding to the second request from the application, storing the data in the gateway cache, acquiring the data responding to the second request from the gateway cache, and sending the data to the second sending end;
if not, sending the second request to other applications except the application, acquiring data responding to the second request from the other applications, storing the data in the gateway cache, acquiring the data responding to the second request from the gateway cache, and sending the data to the second sending end.
9. The apparatus of claim 7, further comprising:
the control unit is used for receiving the address of the configuration file sent by the application before inquiring the data responding to the second request in the gateway cache according to the second request; acquiring the configuration file according to the address of the configuration file, wherein the configuration file comprises a current limiting rule; and judging whether the second request meets the current limiting rule, if so, returning the second request to the second sending end.
10. The apparatus according to claim 9, wherein the control unit is further configured to:
if the configuration file further comprises a degradation rule, after the configuration file is obtained according to the address of the configuration file, whether the second request meets the degradation rule is judged, and if yes, preset degradation data is sent to the second sending end.
11. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-5.
12. A computer-readable medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1-5.
CN201910197384.3A 2019-03-15 2019-03-15 Method and device for processing request Active CN111698273B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910197384.3A CN111698273B (en) 2019-03-15 2019-03-15 Method and device for processing request


Publications (2)

Publication Number Publication Date
CN111698273A true CN111698273A (en) 2020-09-22
CN111698273B CN111698273B (en) 2024-04-09

Family

ID=72475900

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910197384.3A Active CN111698273B (en) 2019-03-15 2019-03-15 Method and device for processing request

Country Status (1)

Country Link
CN (1) CN111698273B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130219006A1 (en) * 2012-02-21 2013-08-22 Sony Corporation Multiple media devices through a gateway server or services to access cloud computing service storage
EP2833272A1 (en) * 2013-07-29 2015-02-04 Amadeus S.A.S. Processing information queries in a distributed information processing environment
US20170272371A1 (en) * 2016-03-21 2017-09-21 Alibaba Group Holding Limited Flow control in connection with an access request
US10061852B1 (en) * 2015-05-19 2018-08-28 Amazon Technologies, Inc. Transparent proxy tunnel caching for database access




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant