CN113973135A - Data caching processing method and device, caching grid platform and storage medium


Info

Publication number: CN113973135A
Application number: CN202111217735.6A
Authority: CN (China)
Legal status: Pending
Other languages: Chinese (zh)
Prior art keywords: cache, target, service, request, agent
Inventor: Gui Bin (桂斌)
Assignee: Beijing Jingdong Century Trading Co Ltd; Beijing Wodong Tianjun Information Technology Co Ltd
Classifications

  • Information Retrieval; DB Structures and FS Structures Therefor

Abstract

The embodiment of the application provides a data cache processing method and apparatus, a cache grid platform, and a storage medium. The data cache processing method includes the following steps: receiving, through a first cache agent of a target service, service request information sent by at least one second cache agent of a requesting service, where the service request information includes a uniform resource locator; and determining a target request interface corresponding to the uniform resource locator, and, if a target cache processing process configured for the target request interface is in an enabled state, calling the target cache processing process through the first cache agent to process the request data requested by the service request information in the cache database.

Description

Data caching processing method and device, caching grid platform and storage medium
Technical Field
The present application relates to the field of data processing, and in particular, to a data cache processing method, a data cache processing apparatus, a cache grid platform, and a storage medium.
Background
With the rapid development of internet technology, the service mesh (Service Mesh) has emerged as a dedicated infrastructure layer for handling service-to-service communication; it is responsible for delivering requests securely and reliably through the complex service topologies of modern cloud-native applications. Service mesh technology is becoming increasingly popular as the underlying technology and architectural model for microservices and cloud-native architectures. A service mesh is primarily a network infrastructure component that allows enterprises to offload network communication logic from microservice-based applications, so that they can focus entirely on the business logic of the service; at the same time, to improve service access speed, enterprises provide a caching function for each service.
In the related art, the service caching logic and the business logic are coupled in the service code during development, so the caching logic is implemented within, and bound to, each service in order to realize that service's caching function. However, this approach at least makes code development more complex and increases the maintenance workload.
Disclosure of Invention
Embodiments of the present application are intended to provide a data cache processing method, a data cache processing apparatus, a cache grid platform, and a storage medium, so as to at least solve the problems of complicated code development and increased maintenance workload in the related art.
The technical solutions of the embodiments of the present application are realized as follows:
in a first aspect, an embodiment of the present application provides a data cache processing method, where the method includes:
receiving, through a first cache agent of a target service, service request information sent by at least one second cache agent of a requesting service, wherein the service request information comprises a uniform resource locator;
and determining a target request interface corresponding to the uniform resource locator, and, if a target cache processing process configured for the target request interface is in an enabled state, calling the target cache processing process to process the request data requested by the service request information in the cache database.
In a second aspect, an embodiment of the present application provides a data cache processing apparatus, where the apparatus includes:
a receiving module, configured to receive, through a first cache agent of a target service, service request information sent by at least one second cache agent of a requesting service, wherein the service request information comprises a uniform resource locator;
a determining module, configured to determine a target request interface corresponding to the uniform resource locator;
and a processing module, configured to, if a target cache processing process configured for the target request interface is in an enabled state, call the target cache processing process to process the request data requested by the service request information in the cache database.
In a third aspect, an embodiment of the present application provides a cache grid platform, including: a cache management center, a cache grid, and a cache database; the cache management center includes an information management module and an information synchronization module;
the information management module is used for configuring interface configuration information for a request interface and configuring cache configuration information for a cache processing process corresponding to the request interface;
the information synchronization module is used for issuing and synchronizing the interface configuration information and the cache configuration information to the cache agents of each service included in the cache grid;
the cache grid comprises a plurality of services and a cache agent corresponding to each service in the plurality of services, wherein the plurality of cache agents comprise a first cache agent and a second cache agent;
the first caching agent is used for receiving service request information sent by at least one second caching agent of a requesting service, wherein the service request information comprises a uniform resource locator;
and for determining a target request interface corresponding to the uniform resource locator, and, if a target cache processing process configured for the target request interface is in an enabled state, calling the target cache processing process through the first cache agent to process the request data requested by the service request information in the cache database;
the cache database is used for storing the request data requested by the service request information.
In a fourth aspect, an embodiment of the present application provides a storage medium, where one or more programs are stored, and the one or more programs are executable by one or more processors to implement the steps of the data caching processing method described above.
The embodiments of the application achieve the following beneficial effects: data access is accelerated through the cache, which reduces the load on back-end applications and data storage, improves system performance, increases data read speed, and relieves storage access pressure, while also reducing the workload of code development and of maintaining the system platform.
This is because service request information sent by at least one second caching agent of a requesting service is received through the first caching agent of the target service, where the service request information includes a uniform resource locator; a target request interface corresponding to the uniform resource locator is determined, and if the target cache processing process configured for the target request interface is in an enabled state, the target cache processing process is called through the first cache agent to process the request data requested by the service request information in the cache database. Since the target cache processing process is configured on the target request interface and is enabled, calling it through the first cache agent to process the requested data accelerates data access through the cache, reducing the load on back-end applications and data storage, improving system performance, increasing data read speed, and relieving storage access pressure, while also reducing the workload of code development and of maintaining the system platform.
Drawings
Fig. 1 is a schematic flow chart of an alternative data caching processing method according to an embodiment of the present application;
fig. 2 is an optional flowchart of a data caching processing method according to an embodiment of the present application;
fig. 3 is an alternative architecture diagram of a cache grid platform according to an embodiment of the present disclosure;
fig. 4 is a block diagram of an alternative structure of a cache grid platform according to an embodiment of the present disclosure;
fig. 5 is an optional flowchart of a data caching processing method according to an embodiment of the present application;
fig. 6 is an optional flowchart of a data caching processing method according to an embodiment of the present application;
fig. 7 is an alternative flowchart of a data caching processing method according to an embodiment of the present application;
fig. 8 is an alternative flowchart of a data caching processing method according to an embodiment of the present application;
fig. 9 is an alternative schematic structural diagram of a data cache processing apparatus according to an embodiment of the present application;
fig. 10 is an alternative structural diagram of a cache grid platform according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
It should be appreciated that reference throughout this specification to "an embodiment of the present application" or "an embodiment described previously" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in the embodiments of the present application" or "in the embodiments" in various places throughout this specification are not necessarily all referring to the same embodiments. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application. The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
An embodiment of the present application provides a data caching processing method, which is applied to a cache grid platform, and as shown in fig. 1, the method includes the following steps:
step 101, receiving, through a first caching agent of a target service, service request information sent by at least one second caching agent of a requesting service.
Wherein the service request information includes a uniform resource locator.
In the embodiment of the present application, the cache agent may be understood as an agent that the cache grid platform introduces for the microservice process on each service deployment node, so that the cache agent is deployed bound to the microservice process. The cache agent intercepts all service request information and also forwards all service request information. Here, the cache agent is also called a sidecar cache agent, and a service is also called a microservice.
In the embodiment of the application, the cache agent includes a network agent, which is responsible for basic network communication functions in the microservice environment, such as service registration, service discovery, and service governance, so that the microservice itself does not need to handle basic network communication.
In this embodiment of the present application, the first cache agent is the agent that receives the service request information, and a second cache agent is an agent that sends service request information; there may be multiple second cache agents sending service request information, and the first cache agent and the multiple second cache agents are connected over the network to form a meshed communication network in the cache grid platform. Here, the first cache agent with its corresponding service and the at least one second cache agent with its corresponding services together form a cache mesh, and the cache grid platform includes this cache mesh (Cache Mesh).
In the embodiment of the application, the service request information is the request information with which at least one requesting service obtains, from the target service, the request data associated with that service. The service request information includes a Uniform Resource Locator (URL), a standard way of specifying the location of a resource on an internet web service.
In the embodiment of the application, the cache grid platform receives, through the first cache agent of the target service, the service request information sent by at least one second cache agent of a requesting service, and parses the service request information through the first cache agent to obtain the uniform resource locator it contains, so that the first cache agent can process the request according to the uniform resource locator.
In a realizable application scenario, when the cache grid platform is constructed, the service mesh serves as the infrastructure layer of the microservice technical architecture within the platform, making network communication and related functions in the cache grid platform more flexible, more reliable, and faster.
And step 102, determining a target request interface corresponding to the uniform resource locator, and, if the target cache processing process configured for the target request interface is in an enabled state, calling the target cache processing process through the first cache agent to process the request data requested by the service request information in the cache database.
In the embodiment of the application, each service provides a plurality of request interfaces, and the cache management center in the cache grid platform configures corresponding interface configuration information and cache configuration information for each request interface. It should be noted that the interface configuration information includes the use state of the cache processing process configured for the request interface; the use state of the cache processing process is either enabled or disabled.
In the embodiment of the present application, the cache database is a database that stores data in the form of key-value pairs (key-value), and the cache database is also called a cache storage cluster.
In the embodiment of the application, when the cache grid platform receives, through the first cache agent of the target service, the service request information sent by at least one second cache agent of a requesting service, it obtains through the first cache agent the uniform resource locator included in the service request information and, based on it, determines the target request interface corresponding to the uniform resource locator. Further, the first cache agent obtains the target interface configuration information that the cache management center in the cache grid platform has configured for the target request interface; if this configuration shows that the use state of the target cache processing process configured for the target request interface is enabled, the target cache processing process is called through the first cache agent so that it processes the request data requested by the service request information in the cache database.
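The patent describes this dispatch only in prose; the following minimal Go sketch shows the shape of the decision in the first cache agent. All type, field, and function names are invented for illustration, and nothing here is a concrete signature from the source.

```go
package main

import "fmt"

// Illustrative types; the patent prescribes no concrete signatures.
type State int

const (
	Disabled State = iota
	Enabled
)

type InterfaceConfig struct {
	Protocol string // transmission protocol, e.g. "http/GET"
	Path     string // interface path, e.g. "/api/CacheMesh/test"
	State    State  // use state of the configured cache processing process
}

type Agent struct {
	interfaces map[string]InterfaceConfig // interface configs keyed by protocol + path
}

// Dispatch resolves the target request interface from the URL parts and
// either invokes the cache processing process or falls through to the
// target service, mirroring steps 101 and 102.
func (a *Agent) Dispatch(protocol, path string) string {
	cfg, ok := a.interfaces[protocol+" "+path]
	if !ok || cfg.State != Enabled {
		return "forward request to target service"
	}
	return "invoke target cache processing process for " + cfg.Path
}

func main() {
	a := &Agent{interfaces: map[string]InterfaceConfig{
		"http/GET /api/CacheMesh/test": {Protocol: "http/GET", Path: "/api/CacheMesh/test", State: Enabled},
	}}
	fmt.Println(a.Dispatch("http/GET", "/api/CacheMesh/test"))
}
```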
The embodiment of the application provides a data caching processing method: service request information sent by at least one second caching agent of a requesting service is received through the first caching agent of a target service, where the service request information includes a uniform resource locator; a target request interface corresponding to the uniform resource locator is determined, and if the target cache processing process configured for the target request interface is in an enabled state, the target cache processing process is called through the first cache agent to process the request data requested by the service request information in the cache database. Because the target cache processing process is configured on the target request interface and is enabled, calling it through the first cache agent to process the requested data accelerates data access through the cache, which reduces the load on back-end applications and data storage, improves system performance, increases data read speed, and relieves storage access pressure, while also reducing the workload of code development and of maintaining the system platform.
An embodiment of the present application provides a data caching processing method, which is applied to a cache grid platform, and as shown in fig. 2, the method includes the following steps:
step 201, receiving, through a first caching agent of a target service, service request information sent by at least one second caching agent of a requesting service.
Wherein the service request information includes a uniform resource locator.
Step 202, resolving the uniform resource locator, and determining the transmission protocol and corresponding interface path that the uniform resource locator uses.
In the embodiment of the application, the transmission protocol is the set of rules by which a browser and a web server communicate, that is, a protocol for transmitting web documents over the internet. The transmission protocol comprises a transmission protocol type and the request method used within that protocol. The transmission protocol types include the HyperText Transfer Protocol (HTTP), HTTP Secure (HTTPS), and the Real-time Transport Protocol (RTP), as well as the Transmission Control Protocol (TCP) and Remote Procedure Call (RPC). The request methods include GET, POST, PUT, and DELETE: GET obtains or queries data resource information, POST updates it, PUT adds it, and DELETE deletes it. It should be noted that the application places no particular limitation on the transmission protocol.
In the embodiment of the present application, the interface path is location information of an interface provided by the cache grid platform for the service.
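Step 202 amounts to splitting the URL into a transmission protocol and an interface path. A small sketch using Go's standard net/url package follows; the host name and the combined "scheme/method" rendering of the protocol are assumptions modeled on the patent's later examples.

```go
package main

import (
	"fmt"
	"net/url"
)

// resolve splits a uniform resource locator into the transmission
// protocol and the interface path that are matched against the
// interface configuration. The query string carries the service data
// that the metadata configuration rule extracts later.
func resolve(raw, method string) (protocol, path string, err error) {
	u, err := url.Parse(raw)
	if err != nil {
		return "", "", err
	}
	// The patent's examples write the protocol together with the
	// request method (e.g. "http/GET"), so the two are combined here.
	return u.Scheme + "/" + method, u.Path, nil
}

func main() {
	protocol, path, _ := resolve("http://cache-mesh.example/api/CacheMesh/test?orderid=123", "GET")
	fmt.Println(protocol, path) // http/GET /api/CacheMesh/test
}
```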
In this embodiment of the present application, the cache management center in the cache grid platform configures, for each request interface, corresponding interface configuration information and cache configuration information. The interface configuration information includes an interface identifier (ID), a service identifier (serviceID), an interface name, a transmission protocol, an interface path, a metadata configuration rule, the use state of the cache processing process, and a cache event type. The cache configuration information includes a serial number, an interface identifier, a cache event type, a cache service identifier, a service primary key construction rule, and a tenant identifier (tenantid). It should be noted that interface configuration information and cache configuration information correspond one to one. An example is shown in Table 1.
(Table 1 appears in the source only as an image and is not reproduced here; it lists example interface configuration and cache configuration entries with the fields named above.)
In the embodiment of the application, the cache management center of the cache grid platform configures corresponding interface configuration information and cache configuration information for each request interface in order to decouple the caching logic from the business service logic. The caching capability of each request interface is thereby deployed completely independently and is fully transparent and imperceptible to the business service, so business service development no longer needs to concern itself with cache-related logic. This accelerates data access, which reduces the load on back-end applications and data storage, improves system performance, increases data read speed, and relieves storage access pressure. Meanwhile, the interface configuration information and cache configuration information of a service can be updated dynamically in the cache management center without stopping the service process.
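To make the two configuration records concrete, here is a field-for-field sketch in Go. The field names are illustrative renderings of the terms listed above, not identifiers taken from the patent.

```go
package main

// InterfaceConfig mirrors the interface configuration information
// described above, one record per request interface.
type InterfaceConfig struct {
	InterfaceID  string // interface identifier (ID)
	ServiceID    string // service identifier of the owning service
	Name         string // interface name
	Protocol     string // transmission protocol, e.g. "http/GET"
	Path         string // interface path
	MetadataRule string // metadata configuration rule for extracting service data
	CacheState   string // use state of the cache processing process: "enabled" or "disabled"
	EventType    string // cache event type: "Cacheable" or "CacheEvict"
}

// CacheConfig mirrors the cache configuration information; it
// corresponds one to one with an InterfaceConfig entry.
type CacheConfig struct {
	SerialNo       int
	InterfaceID    string // joins back to InterfaceConfig.InterfaceID
	EventType      string
	CacheServiceID string // cache service identifier
	PrimaryKeyRule string // service primary key construction rule, e.g. "key_ + #id"
	TenantID       string // tenant identifier (tenantid)
}

func main() {}
```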
In other embodiments of the present application, referring to fig. 3 and fig. 4, fig. 3 is a schematic diagram illustrating an alternative architecture of a cache grid platform provided in an embodiment of the present application, and fig. 4 is a block diagram illustrating an alternative structure of the cache grid platform provided in the embodiment of the present application. The cache grid platform 4 includes a cache management center 41, a cache grid 42 and a cache database 43, where the cache database 43 is also called a cache storage cluster.
The cache management center 41 is a centralized cache management service that implements overall management and configuration of the cache grid. It includes a cache telemetry processing module 411, an information management module 412, an information synchronization module 413, a cache statistics monitoring module 414, and a node deployment management module 415. The cache telemetry processing module 411 performs unified monitoring and management of the telemetry data, such as running status and heartbeats, that the cache agents upload for their service systems, so as to determine the communication state of each cache node from the telemetry data and, based on that state, control the use state of the cache processing process corresponding to each request interface of a service, thereby safeguarding the services' ability to run. The information management module 412 manages the interface configuration information of each service request interface and the configuration information of the cache processing process corresponding to that request interface. The information synchronization module 413 issues and synchronizes the interface configuration information and cache configuration information to the cache agents of the services included in the cache grid 42. The cache statistics monitoring module 414 monitors the cached data information, and the node deployment management module 415 deploys and manages the nodes in the cache grid.
The cache grid 42 includes a plurality of services (also referred to as microservices) and a cache agent corresponding to each of the plurality of services, where the plurality of cache agents include a first cache agent and a second cache agent. The service node where each service runs deploys the cache agent bound to the microservice process, with the cache agent receiving all network requests of that process; the network agent inside the cache agent is responsible for basic network communication functions in the microservice environment, such as service registration, service discovery, and service governance, so that the microservice itself need not handle basic network communication. In addition, the cache grid adds, in each service's cache agent, the interface configuration information and cache configuration information configured for its request interfaces. The cache agent evaluates the service request information against the interface configuration information and cache configuration information; if the request hits request data in the cache database, the cache agent returns the request data directly, without the service perceiving it, that is, the cache agent no longer calls the microservice.
It should be noted that, under the microservice architecture, there are many service nodes, and each cache agent holds the interface configuration information and cache configuration information of each request interface. The cache agent can obtain this configuration in the following three ways (a sketch of all three follows the list):
Method 1: when deployment of a service node starts, the cache agent pulls the corresponding interface configuration information and cache configuration information from the cache configuration center according to the deployed service ID or application ID, completing the initial loading of the service node's cache configuration.
Method 2: the cache agent accesses the cache configuration center at fixed time intervals, according to the service ID or application ID, to pull the latest interface configuration information and cache configuration information.
Method 3: after each update of the interface configuration information and cache configuration information, the cache configuration center pushes the latest versions to the cache agent according to information such as the service ID or application ID and the service Internet Protocol (IP) address. Management and delivery of configuration information are thus handled by the centralized cache management center: when interface configuration information and/or cache configuration information change in the cache management center, the cache agent need not be restarted; it can pull the latest configuration from the cache management center and complete dynamic configuration and dynamic updating. In addition, the cache agent reports telemetry data such as the running status and heartbeat of each service system to the cache management center for unified management and monitoring.
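The three synchronization paths can be sketched against a hypothetical management-center client; none of the identifiers below come from the patent.

```go
package main

import (
	"context"
	"time"
)

// Stub records standing in for the two configuration types.
type InterfaceConfig struct{ InterfaceID string }
type CacheConfig struct{ InterfaceID string }

// ConfigClient is a hypothetical client for the cache configuration
// center; Pull fetches the configuration for one service or application.
type ConfigClient interface {
	Pull(ctx context.Context, serviceID string) ([]InterfaceConfig, []CacheConfig, error)
}

type Agent struct {
	serviceID string
	client    ConfigClient
}

// Method 1: initial load when deployment of the service node starts.
func (a *Agent) LoadOnStartup(ctx context.Context) error {
	_, _, err := a.client.Pull(ctx, a.serviceID)
	return err
}

// Method 2: pull the latest configuration at a fixed time interval.
func (a *Agent) RefreshLoop(ctx context.Context, every time.Duration) {
	t := time.NewTicker(every)
	defer t.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-t.C:
			a.client.Pull(ctx, a.serviceID) // errors ignored for brevity
		}
	}
}

// Method 3: the configuration center pushes after each update; the agent
// only has to apply what it receives, without restarting.
func (a *Agent) OnPush(ifaces []InterfaceConfig, caches []CacheConfig) {
	// apply the new configuration atomically
}

func main() {}
```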
The cache database 43 is also called a cache storage cluster; the cluster serves as the cache storage layer of the cache grid. The cache agent of each service node provides an adaptation interface for cache storage, and operations on cache data in the cache storage cluster are performed through this interface.
In the embodiment of the application, the cache grid platform receives, through the first cache agent of the target service, the service request information sent by at least one second cache agent of a requesting service, parses the uniform resource locator in the service request information, and determines the transmission protocol and corresponding interface path that the uniform resource locator uses.
In other embodiments of the present application, the cache grid platform embeds a WebAssembly engine in the cache agent of each service and defines an Application Programming Interface (API) inside the cache agent through which the engine and the agent interact. The cache management center can dynamically code and extend each cache-processing environment of the cache agent, in theory supporting all development languages, and finally deliver the logic to the cache agent as WASM bytecode. Through the WASM engine, code logic can be switched dynamically without stopping the service, keeping the caching capability efficient, dynamic, and flexibly configurable; this supports microservice scenarios spanning multiple development languages and technology stacks, and allows the caching function to be configured dynamically without restarting the service. Caching thus truly becomes a general infrastructure capability, completely transparent and imperceptible to the business logic, greatly reducing the development, deployment, and maintenance costs of the service caching function.
Here, WebAssembly (WASM) is a front-end technology originally aimed at the performance problems of front-end scripting languages. WASM is a bytecode standard: a WASM engine (functionally similar to a Java virtual machine) executes bytecode compiled from various languages and compiles it into machine code for different CPU architectures, providing cross-platform capability.
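The patent names no particular WASM engine or host API, so the following is only a hypothetical host-side contract illustrating the idea of cache logic delivered as bytecode and swapped without a service restart.

```go
package main

// CacheFilter is the hypothetical behavior a compiled WASM module
// exposes to the agent: inspect an intercepted request and either
// answer it from the cache (handled=true) or let it pass through.
type CacheFilter interface {
	OnRequest(url string, body []byte) (resp []byte, handled bool)
}

// WasmEngine is the hypothetical embedded engine: it compiles bytecode
// pushed by the cache management center into a callable filter, so code
// logic can be replaced while the service keeps running.
type WasmEngine interface {
	Load(bytecode []byte) (CacheFilter, error)
}

func main() {}
```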
Step 203, determining a target request interface matched with the transmission protocol and the interface path.
In the embodiment of the application, the cache grid platform determines a target request interface matched with a transmission protocol and an interface path through a first cache agent.
And step 204, acquiring target interface configuration information configured for the target request interface by the cache management center.
In the embodiment of the application, the target interface configuration information is interface information configured by the cache management center for the target request interface. Target interface configuration information includes, but is not limited to, interface name, transport protocol, interface path, metadata configuration rules, and usage status of the cache processing process.
Step 205, if the target cache processing process configured by the target request interface is determined to be in the enabled state based on the target interface configuration information, obtaining the target cache configuration information configured by the cache management center for the target cache processing process.
In the embodiment of the present application, the target cache configuration information is cache information configured by the cache management center for the target cache processing process. The target cache configuration information includes, but is not limited to, an interface identification, a cache event type, a cache service identification, and a business primary key construction rule.
In the embodiment of the application, if the cache grid platform determines, based on the target interface configuration information, that the use state of the target cache processing process configured for the target request interface is enabled, it obtains the target cache configuration information configured by the cache management center for that target cache processing process.
And step 206, based on the target interface configuration information and the target cache configuration information, calling a target cache processing process through the first cache agent, and processing the request data in the cache database requested by the service request information.
In this embodiment of the application, referring to fig. 5, in step 206, based on the target interface configuration information and the target cache configuration information, a target cache processing process is called by a first cache agent, and request data in a cache database requested by service request information is processed, which may be implemented by the following steps:
step 2061, obtaining a metadata configuration rule configured by the cache management center for the target request interface, and a service key construction rule configured for the target cache processing process.
The target interface configuration information comprises a metadata configuration rule, and the target cache configuration information comprises a service main key construction rule.
In the embodiment of the present application, the metadata configuration rule is a rule set for extracting the service data included in the service request information. If the service data is delivered in form format, the metadata configuration rule includes a rule for extracting service data delivered in form format; if the service data is delivered in the uniform resource locator, the metadata configuration rule includes a rule for extracting service data delivered in the uniform resource locator. For example, a service request whose service data is carried in the uniform resource locator may take the form http GET /api/CacheMesh/test?orderid=123, where the service data is orderid=123.
In the embodiment of the present application, the service primary key construction rule is the rule set used to construct the service primary key; for example, the rule may be key_ + #id, where key_ identifies the service and #id stands for the object identifier of the object to be operated on.
In the embodiment of the application, the cache management center in the cache grid platform configures corresponding target interface configuration information and target cache configuration information for the target request interface; the target interface configuration information includes a metadata configuration rule, and the target cache configuration information includes a service primary key construction rule. The cache grid platform therefore obtains the metadata configuration rule from the target interface configuration information configured for the target request interface, and the service primary key construction rule from the target cache configuration information configured for the target cache processing process.
Step 2062, extracting the service metadata from the service request information according to the metadata configuration rule.
Step 2063, according to the business primary key construction rule, constructing the business primary key of the business metadata.
In the embodiment of the application, after the cache grid platform obtains, through the first cache agent, the metadata configuration rule configured by the cache management center for the target request interface and the service primary key construction rule configured for the target cache processing process, it extracts, through the first cache agent, the service metadata from the service request information according to the metadata configuration rule, and constructs the service primary key of the service metadata according to the service primary key construction rule.
In a realizable application scenario, take the case where the service data included in the service request information is carried in the uniform resource locator: http GET /api/CacheMesh/test?orderid=123 queries the order with id 123, and http GET /api/CacheMesh/test?productid=123 queries the commodity with id 123. The cache grid platform extracts the service metadata from the service request information through the first cache agent according to the metadata configuration rule, obtaining orderid=123 as the service metadata of the order service and productid=123 as the service metadata of the commodity service; further, according to the service primary key construction rule, the cache grid platform constructs, through the first cache agent, the service primary key order_#123 for the order service's metadata and product_#123 for the commodity service's metadata.
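A worked sketch of the two steps, under the assumption that the "key_ + #id" notation means "literal prefix, then the value of the #-marked field taken from the query string"; this is one plausible reading of the patent's example, not a specified syntax.

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// buildKey extracts the service metadata named by the rule from the URL
// query (metadata extraction), then assembles the service primary key
// (primary key construction).
func buildKey(rule, rawURL string) (string, error) {
	parts := strings.SplitN(rule, "+", 2)
	if len(parts) != 2 {
		return "", fmt.Errorf("unrecognized rule %q", rule)
	}
	prefix := strings.TrimSpace(parts[0])                         // e.g. "order_"
	field := strings.TrimPrefix(strings.TrimSpace(parts[1]), "#") // e.g. "orderid"

	u, err := url.Parse(rawURL)
	if err != nil {
		return "", err
	}
	value := u.Query().Get(field)    // service metadata, e.g. "123"
	return prefix + "#" + value, nil // e.g. "order_#123"
}

func main() {
	k1, _ := buildKey("order_ + #orderid", "http://svc/api/CacheMesh/test?orderid=123")
	k2, _ := buildKey("product_ + #productid", "http://svc/api/CacheMesh/test?productid=123")
	fmt.Println(k1, k2) // order_#123 product_#123
}
```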
Step 2064, the target cache processing process is called through the first cache agent, and the request data in the cache database is processed based on the service main key.
In the embodiment of the present application, processing request data in a cache database based on a service primary key includes: and based on the service primary key, inquiring the request data in the cache database, or deleting the request data in the cache database.
In this embodiment of the present application, step 2064 calls the target cache processing process through the first cache agent and processes the request data in the cache database based on the service primary key; this may be implemented in the following two ways:
the first method is as follows: if the cache event type configured for the target cache processing process by the cache management center is a cache write-in type, calling the target cache processing process through the first cache agent, and searching request data corresponding to the service main key from the cache database.
Wherein the target cache configuration information comprises a cache event type.
In the embodiment of the present application, the cache write type may be denoted as Cacheable.
In the embodiment of the application, if the cache event type configured by the cache management center for the target cache processing process is the cache write type, the target cache processing process is called through the first cache agent, and the request data corresponding to the service primary key is looked up in the cache database (a cache hit); meanwhile, the first cache agent sends the request data for the service request information to the second cache agent. Thus, when the request keyed by the service primary key hits cached data in the cache database, the first cache agent obtains the request data for the service request information directly from the cache database and no longer calls the target service, so the target service perceives nothing; and the higher the hit rate of the cached data in the cache database, the higher the cache utilization.
Method 2: if the cache event type configured by the cache management center for the target cache processing process is the cache clean type, call the target cache processing process through the first cache agent and delete the request data corresponding to the service primary key from the cache database.
In the embodiment of the present application, the cache flush type may be represented as CacheEvict.
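The two cache event types can be summarized over an in-memory key-value map standing in for the cache database (which the patent describes as a key-value cache storage cluster); all names are illustrative.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// CacheDB is a toy stand-in for the key-value cache storage cluster.
type CacheDB struct {
	mu   sync.RWMutex
	data map[string][]byte
}

// handle applies the configured cache event type to the service primary key.
func (c *CacheDB) handle(eventType, primaryKey string) ([]byte, error) {
	switch eventType {
	case "Cacheable": // cache write type: look up the requested data
		c.mu.RLock()
		defer c.mu.RUnlock()
		if v, ok := c.data[primaryKey]; ok {
			return v, nil // cache hit: returned without calling the target service
		}
		return nil, errors.New("cache miss")
	case "CacheEvict": // cache clean type: delete the entry
		c.mu.Lock()
		defer c.mu.Unlock()
		delete(c.data, primaryKey)
		return nil, nil
	}
	return nil, fmt.Errorf("unknown cache event type %q", eventType)
}

func main() {
	db := &CacheDB{data: map[string][]byte{"order_#123": []byte(`{"id":123}`)}}
	if v, err := db.handle("Cacheable", "order_#123"); err == nil {
		fmt.Println(string(v))
	}
	db.handle("CacheEvict", "order_#123") // entry removed
}
```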
In this embodiment, referring to fig. 6, in step 2064, a target cache processing process is called by a first cache agent, and request data in a cache database is processed based on a service primary key, which may be implemented by the following steps:
step A1, if the target cache processing process is called through the first cache agent, the request data corresponding to the service primary key is not found from the cache database, and the first calling request is sent to the target service through the first cache agent.
The first calling request is used for requesting to call a target service, and the request data corresponding to the business main key is searched in the relational database.
In the embodiment of the application, the priority of the cache database is higher than that of the relational database.
In the embodiment of the application, after determining the service primary key of the service metadata through the first cache agent, the cache grid platform first calls the target cache processing process through the first cache agent and looks up the request data corresponding to the service primary key in the higher-priority cache database; if the request data is not found there, the first cache agent sends to the target service a first call request, which requests calling the target service so that the target service looks up the request data corresponding to the service primary key in the relational database.
And step A2, receiving request data fed back by the target service for the first call request.
In the embodiment of the application, after the cache grid platform sends the first call request to the target service through the first cache agent, the target service looks up the request data corresponding to the service primary key in the relational database based on the first call request and sends the data obtained for the first call request to the first cache agent. The cache grid platform receives, through the first cache agent, the request data fed back by the target service for the first call request, and the first cache agent also sends the request data for the service request information to the second cache agent.
Step A3, sending the request data to the cache database to store the request data in the cache database.
In the embodiment of the application, after receiving request data fed back by a target service for a first call request through a first cache agent, the cache grid platform sends the request data to a cache database so as to store the request data into the cache database.
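Steps A1 to A3 together form a read-through pattern: serve from the cache when possible, otherwise call the target service and backfill. A sketch under illustrative types follows; nothing here is prescribed by the patent.

```go
package main

import "fmt"

// Store stands in for the cache database.
type Store interface {
	Get(key string) ([]byte, bool)
	Put(key string, val []byte)
}

// TargetService stands in for the target service, which answers the
// first call request by reading the relational database.
type TargetService interface {
	FirstCallRequest(key string) ([]byte, error)
}

func readThrough(cache Store, svc TargetService, key string) ([]byte, error) {
	if v, ok := cache.Get(key); ok {
		return v, nil // served from the higher-priority cache database
	}
	v, err := svc.FirstCallRequest(key) // steps A1 and A2
	if err != nil {
		return nil, err
	}
	cache.Put(key, v) // step A3: store for subsequent requests
	return v, nil
}

type mapStore map[string][]byte

func (m mapStore) Get(k string) ([]byte, bool) { v, ok := m[k]; return v, ok }
func (m mapStore) Put(k string, v []byte)      { m[k] = v }

type stubService struct{}

func (stubService) FirstCallRequest(key string) ([]byte, error) {
	return []byte(`{"source":"relational-db"}`), nil
}

func main() {
	cache := mapStore{}
	v, _ := readThrough(cache, stubService{}, "order_#123") // miss, then backfill
	fmt.Println(string(v))
	v, _ = readThrough(cache, stubService{}, "order_#123") // now a cache hit
	fmt.Println(string(v))
}
```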
As can be seen from the above, the present application sets interface configuration information and cache configuration information for each request interface; the service primary key of the service request information is then obtained based on the transmission protocol, interface path, metadata configuration rule, use state of the target cache processing process, and cache event type in the interface configuration information, together with the service primary key construction rule in the cache configuration information. Based on the service primary key, the request data is first looked up in the higher-priority cache database; if it is not found there, calling the target service is requested, so that the target service looks up the request data corresponding to the service primary key in the relational database. In this way, data access is accelerated through the cache, which reduces the load on back-end applications and data storage, improves system performance, increases data read speed, and relieves storage access pressure, allowing limited resources to serve more users; furthermore, the interface configuration information and cache configuration information of a service can be updated dynamically in the cache management center without stopping the service process.
An embodiment of the present application provides a data caching processing method, which is applied to a cache grid platform, and as shown in fig. 7, the method includes the following steps:
step 301, receiving, through a first caching agent of a target service, service request information sent by at least one second caching agent of a requesting service.
Wherein the service request information includes a uniform resource locator.
Step 302, resolving the uniform resource locator, and determining the transmission protocol and corresponding interface path that the uniform resource locator uses.
Step 303, determining a target request interface matched with the transmission protocol and the interface path.
And step 304, acquiring target interface configuration information configured for the target request interface by the cache management center.
Step 305, if the target cache processing process configured by the target request interface is determined to be in the disabled state based on the target interface configuration information, the second call request is sent to the target service through the first cache agent.
The second call request is used to request calling the target service so that the target service looks up the request data corresponding to the service request information in the relational database.
Step 306, receiving request data fed back by the target service for the second call request.
In the embodiment of the application, the cache grid platform receives, through the first cache agent, the service request information sent by at least one second cache agent of a requesting service, parses the uniform resource locator included in the service request information, and determines the transmission protocol and corresponding interface path it uses. Further, if the target cache processing process configured for the target request interface is in a disabled state, the first cache agent sends a second call request to the target service. The target service then looks up, based on the second call request, the request data corresponding to the service request information in the relational database and sends the data obtained for the second call request to the first cache agent, which forwards the request data for the service request information to the second cache agent. Thus, when the target cache processing process configured for the target request interface is disabled and cached data cannot be obtained from the cache database, the cache grid platform sends a second call request to the target service through the first cache agent so that the target service looks up the request data corresponding to the service request information in the relational database.
The embodiment of the present application provides a data caching processing method, which is applied to a cache grid platform; as shown in fig. 8, the method includes steps 401 to 404; or steps 401 to 403 and steps 405 to 408; or steps 401 to 403, steps 405 to 407, and step 409; or steps 401 to 403, step 405, and step 410:
step 401, receiving, through a first caching agent of a target service, service request information sent by at least one second caching agent of a requesting service.
Wherein the service request information includes a uniform resource locator.
Step 402, analyzing the uniform resource locator, determining the transmission protocol and corresponding interface path that the uniform resource locator uses, and determining a target request interface matching the transmission protocol and the interface path.
And step 403, judging whether the cache management center configures target interface configuration information for the target request interface.
In this embodiment of the present application, if the cache grid platform determines that the cache management center does not configure the target interface configuration information for the target request interface through the first cache agent, step 404 is executed; if it is determined that the cache management center configures the target interface configuration information for the target request interface, step 405 is executed.
Step 404, sending a second call request to the target service through the first cache agent, and receiving request data fed back by the target service for the second call request.
The second call request is used to request calling the target service so that the target service looks up the request data corresponding to the service request information in the relational database.
Step 405, if the target cache processing process configured by the target request interface is determined to be in the enabled state based on the target interface configuration information, obtaining the target cache configuration information configured by the cache management center for the target cache processing process, and determining the cache event type configured by the cache management center for the target cache processing process based on the target cache configuration information.
The target cache configuration information includes a cache event type.
Step 406, if the cache event type configured by the cache management center for the target cache processing process is the cache write type, obtaining the metadata configuration rule in the target interface configuration information and the service primary key construction rule in the target cache configuration information; extracting the service metadata from the service request information according to the metadata configuration rule; and constructing the service primary key of the service metadata according to the service primary key construction rule.
The target interface configuration information comprises a metadata configuration rule, and the target cache configuration information comprises a service main key construction rule.
Step 407, calling the target cache processing process through the first cache agent, and determining whether request data for the service primary key exists in the cache database.
In this embodiment of the present application, if the cache grid platform calls the target cache processing process through the first cache agent and determines that request data for the service primary key exists in the cache database, step 408 is executed; if it determines that request data for the service primary key does not exist in the cache database, step 409 is executed.
Step 408, searching the request data corresponding to the service primary key from the cache database, and sending the request data aiming at the service request information to the second cache agent.
Step 409, sending the first call request to the target service through the first cache agent, and receiving request data fed back by the target service for the first call request.
The first call request is used to request calling the target service so that the target service looks up the request data corresponding to the service primary key in the relational database.
Step 410, if the cache event type configured by the cache management center for the target cache processing process is the cache clean type, obtaining the metadata configuration rule in the target interface configuration information and the service primary key construction rule in the target cache configuration information; extracting the service metadata from the service request information according to the metadata configuration rule; constructing the service primary key of the service metadata according to the service primary key construction rule; and calling the target cache processing process to delete the request data corresponding to the service primary key from the cache database.
The target interface configuration information comprises a metadata configuration rule, and the target cache configuration information comprises a service main key construction rule.
In this way, the method and the apparatus accelerate data access through the cache, which reduces the load on back-end applications and data storage, improves system performance, increases data read speed, and relieves storage access pressure, allowing limited resources to serve more users; furthermore, the interface configuration information and cache configuration information of a service can be updated dynamically in the cache management center without stopping the service process.
It should be noted that, for the descriptions of the same steps and the same contents in this embodiment as those in other embodiments, reference may be made to the descriptions in other embodiments, which are not described herein again.
An embodiment of the present application provides a data cache processing apparatus, where the data cache processing apparatus may be applied to a data cache processing method provided in embodiments corresponding to fig. 1 to 2 and fig. 5 to 7, and as shown in fig. 9, the data cache processing apparatus 9 includes:
a receiving module 901, configured to receive, through a first cache agent of a target service, service request information sent by at least one second cache agent of a requesting service, where the service request information includes a uniform resource locator;
a determining module 902, configured to determine a target request interface corresponding to a uniform resource locator;
the processing module 903 is configured to, if the target cache processing process configured by the target request interface is in an enabled state, invoke the target cache processing process through the first cache agent, and process the request data in the cache database requested by the service request information.
In other embodiments of the present application, the data cache processing apparatus 9 further includes an obtaining module, configured to obtain the target interface configuration information configured by the cache management center for the target request interface and, if it is determined based on the target interface configuration information that the target cache processing process configured for the target request interface is in the enabled state, to obtain the target cache configuration information configured by the cache management center for the target cache processing process. The processing module 903 is further configured to call the target cache processing process through the first cache agent based on the target interface configuration information and the target cache configuration information, and to process the request data requested by the service request information in the cache database.
In other embodiments of the present application, the obtaining module is further configured to obtain the metadata configuration rule configured by the cache management center for the target request interface and the service primary key construction rule configured for the target cache processing process, where the target interface configuration information includes the metadata configuration rule and the target cache configuration information includes the service primary key construction rule; to extract the service metadata from the service request information according to the metadata configuration rule; and to construct the service primary key of the service metadata according to the service primary key construction rule, after which the target cache processing process is called through the first cache agent to process the request data in the cache database based on the service primary key.
In other embodiments of the present application, the processing module 903 is further configured to, if the cache event type configured by the cache management center for the target cache processing process is a cache write-in type, invoke the target cache processing process through the first cache agent, and search for the request data corresponding to the service primary key from the cache database, where the target cache configuration information includes the cache event type.
In other embodiments of the present application, the processing module 903 is further configured to, if the cache event type configured by the cache management center for the target cache processing process is a cache cleaning type, invoke the target cache processing process through the first cache agent and delete the request data corresponding to the service primary key from the cache database.
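The two event types can be pictured as a simple dispatch on the configured type. In the sketch below, the enum CacheEventType and the map standing in for the cache database are assumptions; only the notions of a write type and a cleaning type come from the embodiments.

package main

import "fmt"

// CacheEventType distinguishes the two configured behaviours.
type CacheEventType int

const (
	CacheWriteType CacheEventType = iota // look up the request data
	CacheCleanType                       // invalidate the cached entry
)

// process dispatches on the cache event type for one service primary key.
func process(evt CacheEventType, key string, cache map[string][]byte) ([]byte, bool) {
	switch evt {
	case CacheWriteType:
		data, ok := cache[key] // search the cache by service primary key
		return data, ok
	case CacheCleanType:
		delete(cache, key) // drop the stale entry from the cache database
		return nil, false
	default:
		return nil, false
	}
}

func main() {
	cache := map[string][]byte{"order-service:42:9001": []byte("cached row")}
	data, ok := process(CacheWriteType, "order-service:42:9001", cache)
	fmt.Println(string(data), ok) // cached row true
	process(CacheCleanType, "order-service:42:9001", cache)
	_, ok = process(CacheWriteType, "order-service:42:9001", cache)
	fmt.Println(ok) // false: entry was cleaned
}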
In other embodiments of the present application, the processing module 903 is further configured to, if the target cache processing process is invoked through the first cache agent and the request data corresponding to the service primary key is not found in the cache database, send a first call request to the target service through the first cache agent, where the first call request is used to request invocation of the target service and to search the relational database for the request data corresponding to the service primary key. The receiving module 901 is further configured to receive the request data fed back by the target service for the first call request; and a sending module is configured to send the request data to the cache database, so that the request data is stored in the cache database.
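Taken together with the write type above, this fallback is essentially the cache-aside (read-through) pattern. A minimal sketch follows, in which callTargetService is a hypothetical stand-in for the first call request, behind which the target service would query the relational database.

package main

import "fmt"

// Cache stands in for the cache database.
type Cache struct{ store map[string][]byte }

func (c *Cache) Get(key string) ([]byte, bool) { d, ok := c.store[key]; return d, ok }
func (c *Cache) Set(key string, d []byte)      { c.store[key] = d }

// callTargetService stands in for the first call request; in the patent's
// terms the target service would read the relational database here.
func callTargetService(key string) []byte {
	return []byte("row for " + key)
}

// readThrough serves from the cache, or fills it from the target service.
func readThrough(c *Cache, key string) []byte {
	if data, ok := c.Get(key); ok {
		return data // cache hit: no backend call needed
	}
	data := callTargetService(key) // miss: fetch via the target service
	c.Set(key, data)               // store so the next request hits the cache
	return data
}

func main() {
	c := &Cache{store: map[string][]byte{}}
	fmt.Println(string(readThrough(c, "order-service:42:9001"))) // miss, then fill
	fmt.Println(string(readThrough(c, "order-service:42:9001"))) // hit
}

The second call fetches the same key from the now-populated cache, which is exactly the load reduction on the back-end service and its storage that the embodiments aim for.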
In other embodiments of the present application, the processing module 903 is further configured to, if it is determined based on the target interface configuration information that the target cache processing process configured for the target request interface is in a disabled state, send a second call request to the target service through the first cache agent, where the second call request is used to request invocation of the target service and to search the relational database for the request data corresponding to the service request information. The receiving module 901 is further configured to receive the request data fed back by the target service for the second call request.
In other embodiments of the present application, the processing module 903 is further configured to parse the uniform resource locator and determine the transport protocol and the corresponding interface path used by the uniform resource locator; the determining module 902 is further configured to determine the target request interface matching the transport protocol and the interface path.
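As a concrete illustration of this parsing-and-matching step, the Go sketch below assumes the registered request interfaces are keyed by the pair of transport protocol and interface path; the registry format and all identifiers are assumptions for this sketch.

package main

import (
	"fmt"
	"net/url"
)

// resolveInterface parses the uniform resource locator and matches the
// (transport protocol, interface path) pair against registered interfaces.
func resolveInterface(rawURL string, registry map[string]string) (string, error) {
	u, err := url.Parse(rawURL)
	if err != nil {
		return "", err
	}
	key := u.Scheme + " " + u.Path // transport protocol + interface path
	iface, ok := registry[key]
	if !ok {
		return "", fmt.Errorf("no request interface registered for %q", key)
	}
	return iface, nil
}

func main() {
	registry := map[string]string{"http /api/order/detail": "OrderDetailInterface"}
	iface, err := resolveInterface("http://order-service/api/order/detail?orderId=1", registry)
	fmt.Println(iface, err) // OrderDetailInterface <nil>
}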
An embodiment of the present application provides a cache grid platform, which may be used to implement the data cache processing method provided in the embodiments corresponding to fig. 1 to 2 and fig. 5 to 7. As shown in fig. 10, the cache grid platform 4 includes: a cache management center 41, a cache grid 42, and a cache database 43, where the cache management center 41 comprises an information management module 411 and an information synchronization module 412;
the information management module 411 is configured to configure interface configuration information for a request interface and to configure cache configuration information for the cache processing process corresponding to the request interface; the information synchronization module 412 is configured to issue and synchronize the interface configuration information and the cache configuration information to the cache agent of each service included in the cache grid;
the cache grid 42 comprises a plurality of services and a cache agent corresponding to each of the plurality of services, where the cache agents include a first cache agent and a second cache agent;
the first cache agent is used for receiving service request information sent by at least one second cache agent of a requesting service, where the service request information includes a uniform resource locator;
and for determining a target request interface corresponding to the uniform resource locator and, if the target cache processing process configured for the target request interface is in an enabled state, invoking the target cache processing process through the first cache agent to process the request data requested by the service request information in the cache database;
the cache database is used for storing the request data requested by the service request information.
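The information synchronization module can be pictured as a fan-out from the cache management center to every cache agent in the grid, which is what allows interface and cache configuration to change without restarting any service process. The channel-based delivery in the sketch below is an implementation assumption, not the mechanism claimed by the patent.

package main

import "fmt"

// ConfigUpdate is one configuration change issued by the management center.
type ConfigUpdate struct {
	Interface    string
	CacheEnabled bool
}

// ManagementCenter plays the role of the cache management center.
type ManagementCenter struct {
	agents []chan ConfigUpdate
}

// Register adds one cache agent's inbox to the grid.
func (m *ManagementCenter) Register() chan ConfigUpdate {
	ch := make(chan ConfigUpdate, 8)
	m.agents = append(m.agents, ch)
	return ch
}

// Publish synchronizes an update to every cache agent in the cache grid.
func (m *ManagementCenter) Publish(u ConfigUpdate) {
	for _, ch := range m.agents {
		ch <- u
	}
}

func main() {
	center := &ManagementCenter{}
	agentA, agentB := center.Register(), center.Register()
	center.Publish(ConfigUpdate{Interface: "/api/order", CacheEnabled: false})
	fmt.Println(<-agentA) // {/api/order false}
	fmt.Println(<-agentB) // {/api/order false}
}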
Based on the foregoing embodiments, embodiments of the present application provide a computer storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the steps of:
receiving, through a first cache agent of a target service, service request information sent by at least one second cache agent of a requesting service, where the service request information includes a uniform resource locator; and
determining a target request interface corresponding to the uniform resource locator, and if the target cache processing process configured for the target request interface is in an enabled state, invoking the target cache processing process through the first cache agent to process the request data requested by the service request information in the cache database.
In other embodiments of the present application, the one or more programs are executable by the one or more processors to further implement the steps of:
acquiring target interface configuration information configured by a cache management center for the target request interface; if it is determined based on the target interface configuration information that the target cache processing process configured for the target request interface is in the enabled state, acquiring target cache configuration information configured by the cache management center for the target cache processing process; and invoking the target cache processing process through the first cache agent based on the target interface configuration information and the target cache configuration information, to process the request data requested by the service request information in the cache database.
In other embodiments of the present application, the one or more programs are executable by the one or more processors to further implement the steps of:
acquiring a metadata configuration rule configured by the cache management center for the target request interface and a service primary key construction rule configured for the target cache processing process, where the target interface configuration information includes the metadata configuration rule and the target cache configuration information includes the service primary key construction rule; extracting service metadata from the service request information according to the metadata configuration rule; constructing a service primary key from the service metadata according to the service primary key construction rule; and invoking the target cache processing process through the first cache agent to process the request data in the cache database based on the service primary key.
In other embodiments of the present application, the one or more programs are executable by the one or more processors to further implement the steps of:
if the cache event type configured by the cache management center for the target cache processing process is a cache write type, invoking the target cache processing process through the first cache agent and searching the cache database for the request data corresponding to the service primary key, where the target cache configuration information includes the cache event type.
In other embodiments of the present application, the one or more programs are executable by the one or more processors to further implement the steps of:
if the cache event type configured by the cache management center for the target cache processing process is a cache cleaning type, invoking the target cache processing process through the first cache agent and deleting the request data corresponding to the service primary key from the cache database.
In other embodiments of the present application, the one or more programs are executable by the one or more processors to further implement the steps of:
if the target cache processing process is invoked through the first cache agent and the request data corresponding to the service primary key is not found in the cache database, sending a first call request to the target service through the first cache agent, where the first call request is used to request invocation of the target service and to search the relational database for the request data corresponding to the service primary key; receiving the request data fed back by the target service for the first call request; and sending the request data to the cache database, so that the request data is stored in the cache database.
In other embodiments of the present application, the one or more programs are executable by the one or more processors to further implement the steps of:
if it is determined based on the target interface configuration information that the target cache processing process configured for the target request interface is in a disabled state, sending a second call request to the target service through the first cache agent, where the second call request is used to request invocation of the target service and to search the relational database for the request data corresponding to the service request information; and receiving the request data fed back by the target service for the second call request.
In other embodiments of the present application, the one or more programs are executable by the one or more processors to further implement the steps of:
parsing the uniform resource locator and determining the transport protocol and the corresponding interface path used by the uniform resource locator; and determining the target request interface matching the transport protocol and the interface path.
The computer storage medium/memory may be a Read Only Memory (ROM), a Programmable Read Only Memory (PROM), an Erasable Programmable Read Only Memory (EPROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a Ferroelectric Random Access Memory (FRAM), a Flash Memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); it may also be included in various terminals, such as mobile phones, computers, tablet devices, and personal digital assistants, that comprise one or any combination of the above memories.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division of units is only a logical functional division, and other divisions are possible in actual implementation, such as: multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be through interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; that is, they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may all be integrated into one processing module, or each unit may exist separately, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit. Those of ordinary skill in the art will understand that all or part of the steps for implementing the method embodiments may be carried out by hardware related to program instructions; the program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments. The aforementioned storage medium includes various media capable of storing program code, such as a removable memory device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The features disclosed in the several method or apparatus embodiments provided in the present application may be combined arbitrarily, without conflict, to arrive at new method or apparatus embodiments.
The above description covers only specific embodiments of the present application, but the scope of the present application is not limited thereto; any changes or substitutions that a person skilled in the art could readily conceive of within the technical scope disclosed in the present application shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (11)

1. A data cache processing method, the method comprising:
receiving, through a first cache agent of a target service, service request information sent by at least one second cache agent of a requesting service, wherein the service request information comprises a uniform resource locator; and
determining a target request interface corresponding to the uniform resource locator, and if a target cache processing process configured for the target request interface is in an enabled state, invoking the target cache processing process through the first cache agent to process the request data requested by the service request information in the cache database.
2. The method according to claim 1, wherein, if the target cache processing process configured for the target request interface is in the enabled state, the invoking the target cache processing process through the first cache agent to process the request data requested by the service request information in the cache database comprises:
acquiring target interface configuration information configured for the target request interface by a cache management center;
if it is determined based on the target interface configuration information that the target cache processing process configured for the target request interface is in the enabled state, acquiring the target cache configuration information configured by the cache management center for the target cache processing process; and
invoking, based on the target interface configuration information and the target cache configuration information, the target cache processing process through the first cache agent to process the request data requested by the service request information in the cache database.
3. The method according to claim 2, wherein the invoking, based on the target interface configuration information and the target cache configuration information, the target cache processing process through the first cache agent to process the request data requested by the service request information in the cache database comprises:
acquiring a metadata configuration rule configured by the cache management center for the target request interface and a service primary key construction rule configured for the target cache processing process, wherein the target interface configuration information comprises the metadata configuration rule and the target cache configuration information comprises the service primary key construction rule;
extracting service metadata from the service request information according to the metadata configuration rule;
constructing a service primary key from the service metadata according to the service primary key construction rule; and
invoking the target cache processing process through the first cache agent to process the request data in the cache database based on the service primary key.
4. The method of claim 3, wherein the invoking the target cache processing process through the first cache agent to process the request data in the cache database based on the service primary key comprises:
if the cache event type configured by the cache management center for the target cache processing process is a cache write type, invoking the target cache processing process through the first cache agent and searching the cache database for the request data corresponding to the service primary key, wherein the target cache configuration information comprises the cache event type.
5. The method of claim 4, further comprising:
if the cache event type configured by the cache management center for the target cache processing process is a cache cleaning type, invoking the target cache processing process through the first cache agent and deleting the request data corresponding to the service primary key from the cache database.
6. The method of claim 3, wherein the invoking the target cache processing process through the first cache agent to process the request data in the cache database based on the service primary key comprises:
if the target cache processing process is invoked through the first cache agent and the request data corresponding to the service primary key is not found in the cache database, sending a first call request to the target service through the first cache agent, wherein the first call request is used for requesting invocation of the target service and searching a relational database for the request data corresponding to the service primary key;
receiving the request data fed back by the target service for the first call request;
and sending the request data to the cache database to store the request data in the cache database.
7. The method of claim 2, further comprising:
if it is determined based on the target interface configuration information that the target cache processing process configured for the target request interface is in a disabled state, sending a second call request to the target service through the first cache agent, wherein the second call request is used for requesting invocation of the target service and searching a relational database for the request data corresponding to the service request information; and
receiving the request data fed back by the target service for the second call request.
8. The method according to any one of claims 1 to 7, wherein the determining a target request interface corresponding to the uniform resource locator comprises:
parsing the uniform resource locator, and determining the transport protocol and the corresponding interface path used by the uniform resource locator; and
determining the target request interface matching the transport protocol and the interface path.
9. A data cache processing apparatus, the apparatus comprising:
a receiving module, used for receiving, through a first cache agent of a target service, service request information sent by at least one second cache agent of a requesting service, wherein the service request information comprises a uniform resource locator;
a determining module, used for determining a target request interface corresponding to the uniform resource locator; and
a processing module, used for, if a target cache processing process configured for the target request interface is in an enabled state, invoking the target cache processing process through the first cache agent to process the request data requested by the service request information in the cache database.
10. A caching grid platform, comprising: the system comprises a cache management center, a cache grid and a cache database; the cache management center comprises an information management module and an information synchronization module;
the information management module is used for configuring interface configuration information for a request interface and configuring cache configuration information for a cache processing process corresponding to the request interface;
the information synchronization module is used for issuing and synchronizing the interface configuration information and the cache configuration information to the cache agent of each service included in the cache grid;
the cache grid comprises a plurality of services and a cache agent corresponding to each service in the plurality of services, wherein the plurality of cache agents comprise a first cache agent and a second cache agent;
the first cache agent is used for receiving service request information sent by at least one second cache agent of a requesting service, wherein the service request information comprises a uniform resource locator;
and for determining a target request interface corresponding to the uniform resource locator and, if a target cache processing process configured for the target request interface is in an enabled state, invoking the target cache processing process through the first cache agent to process the request data requested by the service request information in the cache database;
the cache database is used for storing the request data requested by the service request information.
11. A storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement a data cache processing method according to any one of claims 1 to 8.
CN202111217735.6A 2021-10-19 2021-10-19 Data caching processing method and device, caching grid platform and storage medium Pending CN113973135A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111217735.6A CN113973135A (en) 2021-10-19 2021-10-19 Data caching processing method and device, caching grid platform and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111217735.6A CN113973135A (en) 2021-10-19 2021-10-19 Data caching processing method and device, caching grid platform and storage medium

Publications (1)

Publication Number Publication Date
CN113973135A 2022-01-25

Family

ID=79587631

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111217735.6A Pending CN113973135A (en) 2021-10-19 2021-10-19 Data caching processing method and device, caching grid platform and storage medium

Country Status (1)

Country Link
CN (1) CN113973135A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1791213A (en) * 2005-12-27 2006-06-21 北京邮电大学 Mobile video order service system with optimized performance and realizing method
CN103248684A (en) * 2013-04-28 2013-08-14 北京奇虎科技有限公司 Resource acquiring method and device on Internet
WO2017020743A1 (en) * 2015-08-06 2017-02-09 阿里巴巴集团控股有限公司 Method and device for sharing cache data
CN105208100A (en) * 2015-08-25 2015-12-30 联创车盟汽车服务有限公司 Interface data processing method
CN110198333A (en) * 2018-04-18 2019-09-03 腾讯科技(深圳)有限公司 Data capture method and device, storage medium and electronic device
WO2020231502A1 (en) * 2019-05-14 2020-11-19 Level 3 Communications, Llc Cache management in content delivery systems
US20200396309A1 (en) * 2019-06-17 2020-12-17 Beijing Didi Infinity Technology And Development Co., Ltd. Systems and methods for data processing
CN112685193A (en) * 2020-12-18 2021-04-20 易久批信息技术有限公司 Micro-service gateway interface exposure and aggregation method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
王胜; 杨超; 崔蔚; 黄高攀; 张明明: "Distributed Cache Based on MongoDB", Computer Systems & Applications (计算机系统应用), no. 04 *
马邦阳; 魏伟刚; 浦云明; 尤志宁; 王巍; 陈凯萌: "Design of a Live-Streaming Acceleration Cache System Based on ATS", Journal of Jimei University (Natural Science Edition) (集美大学学报(自然科学版)), no. 02 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114449040A (en) * 2022-01-28 2022-05-06 杭州迪普科技股份有限公司 Configuration issuing method and device based on cloud platform
CN114449040B (en) * 2022-01-28 2023-12-05 杭州迪普科技股份有限公司 Configuration issuing method and device based on cloud platform

Similar Documents

Publication Publication Date Title
US8312451B2 (en) Computing system for providing software components on demand to a mobile device
CN101188566B (en) A method and system for data buffering and synchronization under cluster environment
EP3837604B1 (en) In situ triggered function as a service within a service mesh
WO2009062414A1 (en) Integrate client and method of desktop application communicating with net web application
CN101360127A (en) File updating method and transmission system
US20060259523A1 (en) System and method of synchronization of internal data cache with wireless device application data repositories
CN111258723B (en) Transaction processing method, device, system, medium and equipment of distributed system
CN111491037A (en) Communication method with object storage server through SFTP data stream
CN114090388A (en) Information acquisition method, server cluster, server and equipment
CN114448686B (en) Cross-network communication device and method based on micro-service
CN111385325B (en) File distribution system and method based on P2P
CN113973135A (en) Data caching processing method and device, caching grid platform and storage medium
CN116204239A (en) Service processing method, device and computer readable storage medium
CN114629904A (en) Distributed event processing method, system, device and medium
KR100833494B1 (en) Apparatus for lightweighting application through duplicating management of application in environment of OSGi middleware for embedded terminal and method thereof
CN116974948B (en) Service system testing method, system, equipment and medium
CN107968798A (en) A kind of network management resources label acquisition method, cache synchronization method, apparatus and system
CN113746894A (en) Method and device for realizing load balancing based on HDFS RBF routing layer
CN111083182B (en) Distributed Internet of things equipment management method and device
CN113378095A (en) Dynamic loading method, device and equipment of signature algorithm and storage medium
CN107193989B (en) NAS cluster cache processing method and system
CN114830105A (en) Data reading method and terminal
CN117135156B (en) Edge cluster nano-tube method, system, computer readable storage medium and electronic equipment based on publish/subscribe message protocol
CN116389252B (en) Method, device, system, electronic equipment and storage medium for updating container network
JP2001067325A (en) Method and system for managing distributed object

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination