CN114374657A - Data processing method and device - Google Patents

Data processing method and device

Info

Publication number
CN114374657A
Authority
CN
China
Prior art keywords
request data
interface
queue
processing
threshold
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210005424.1A
Other languages
Chinese (zh)
Inventor
白永伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jingdong Technology Holding Co Ltd
Original Assignee
Jingdong Technology Holding Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jingdong Technology Holding Co Ltd filed Critical Jingdong Technology Holding Co Ltd
Priority to CN202210005424.1A priority Critical patent/CN114374657A/en
Publication of CN114374657A publication Critical patent/CN114374657A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/50 Queue scheduling

Abstract

The invention discloses a data processing method and device, and relates to the field of computer technology. One embodiment of the method comprises: intercepting request data and updating the queue water level of the interface corresponding to the request data, where the queue records the number of pending requests of the interface and the queue water level indicates how much of the queue's capacity those pending requests occupy; comparing the queue water level of the interface with a preset first threshold and a preset second threshold, and determining the processing mode of the request data according to the comparison result; and processing the request data based on that processing mode. This implementation protects the system without sacrificing its processing capacity, uses a simple architecture that is easy to build, reduces interference between interfaces, and improves system stability.

Description

Data processing method and device
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a data processing method and apparatus.
Background
Online systems often face bursts of network traffic, and such bursts are sometimes unpredictable. There are currently three schemes for handling burst traffic: the first copes with the burst through a current-limiting (rate-limiting) mechanism; the second protects the system through a fusing (circuit-breaking) mechanism; and the third expands capacity by scaling the number of application instances through container orchestration technologies such as Kubernetes, thereby increasing processing capacity. Kubernetes is a service orchestration tool developed for container services.
In the process of implementing the invention, the inventor found at least the following problems in the prior art:
the first and second schemes are passive defense mechanisms that come at the cost of the system's processing capacity; the second scheme makes it difficult to trigger and recover the circuit breaker precisely; and the third scheme has a complex architecture that is difficult to build.
Disclosure of Invention
In view of this, embodiments of the present invention provide a data processing method and apparatus that can protect the system without sacrificing its processing capacity, use a simple architecture that is easy to build, reduce interference between interfaces, and improve system stability.
To achieve the above object, according to an aspect of an embodiment of the present invention, there is provided a data processing method.
A data processing method, comprising: intercepting request data and updating the queue water level of the interface corresponding to the request data, where the queue records the number of pending requests of the interface and the queue water level indicates how much of the queue's capacity those pending requests occupy; comparing the queue water level of the interface with a preset first threshold and a preset second threshold, and determining the processing mode of the request data according to the comparison result; and processing the request data based on the processing mode of the request data.
Optionally, before intercepting the request data and updating the queue water level of the interface corresponding to the request data, the method includes: configuring a queue for each interface so that each interface's call frequency is recorded through its queue water level; and configuring the thread pool corresponding to an interface according to one or more of the interface's category, importance, and call frequency.
Optionally, comparing the queue water level of the interface with a preset first threshold and a preset second threshold and determining the processing mode of the request data according to the comparison result includes: if the queue water level of the interface is less than or equal to the first threshold, the processing mode of the request data is the normal mode; if the queue water level of the interface is greater than or equal to the second threshold, the processing mode of the request data is the thread pool mode; and if the queue water level of the interface is greater than the first threshold and less than the second threshold, the current processing mode of the request data is kept unchanged, where the current processing mode is the normal mode or the thread pool mode.
Optionally, processing the request data based on the processing mode of the request data includes: processing the request data through the thread pool of the interface when the processing mode is the thread pool mode; and processing the request data through the local system when the processing mode is the normal mode.
Optionally, processing the request data through the thread pool of the interface includes: sending the interface name and parameter values of the request data to the thread pool; and obtaining the request result of the request data through the thread pool based on the Java reflection mechanism.
Optionally, the capacity of the queue, the first threshold, and the second threshold are set according to one or more of the interface's category, importance, and call frequency.
Optionally, after processing of the request data is completed, the queue water level of the interface corresponding to the request data is updated.
According to another aspect of the embodiments of the present invention, there is provided a data processing apparatus.
A data processing apparatus, comprising: a request data interception module for intercepting request data and updating the queue water level of the interface corresponding to the request data, where the queue records the number of pending requests of the interface and the queue water level indicates how much of the queue's capacity those pending requests occupy; a processing mode determination module for comparing the queue water level of the interface with a preset first threshold and a preset second threshold and determining the processing mode of the request data according to the comparison result; and a request data processing module for processing the request data based on the processing mode of the request data.
Optionally, the request data includes the name and parameter values of the interface to be used, and the apparatus further includes a configuration module for: configuring a queue for each interface so that each interface's call frequency is recorded through its queue water level; and configuring the thread pool corresponding to an interface according to one or more of the interface's category, importance, and call frequency.
Optionally, the processing mode determination module is further configured to: if the queue water level of the interface is less than or equal to the first threshold, set the processing mode of the request data to the normal mode; if the queue water level of the interface is greater than or equal to the second threshold, set the processing mode of the request data to the thread pool mode; and if the queue water level of the interface is greater than the first threshold and less than the second threshold, keep the current processing mode of the request data, where the current processing mode is the normal mode or the thread pool mode.
Optionally, the request data processing module is further configured to: process the request data through the thread pool of the interface when the processing mode is the thread pool mode; and process the request data through the local system when the processing mode is the normal mode.
Optionally, the request data processing module is further configured to: send the interface name and parameter values of the request data to the thread pool; and obtain the request result of the request data through the thread pool based on the Java reflection mechanism.
Optionally, the capacity of the queue, the first threshold, and the second threshold are set according to one or more of the interface's category, importance, and call frequency.
Optionally, after processing of the request data is completed, the queue water level of the interface corresponding to the request data is updated.
According to yet another aspect of an embodiment of the present invention, an electronic device is provided.
An electronic device, comprising: one or more processors; and a memory for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the data processing method provided by the embodiments of the present invention.
According to yet another aspect of an embodiment of the present invention, a computer-readable medium is provided.
A computer-readable medium on which a computer program is stored, the program, when executed by a processor, implementing the data processing method provided by the embodiments of the present invention.
One embodiment of the above invention has the following advantages or benefits: request data is intercepted and the queue water level of the interface corresponding to the request data is updated, where the queue records the number of pending requests of the interface and the queue water level indicates how much of the queue's capacity those pending requests occupy; the queue water level of the interface is compared with a preset first threshold and a preset second threshold, and the processing mode of the request data is determined according to the comparison result; and the request data is processed based on that processing mode. The system can thus be protected without sacrificing its processing capacity, the architecture is simple and easy to build, interference between interfaces is reduced, and system stability is improved.
Further effects of the optional implementations mentioned above are described below in connection with the specific embodiments.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting the invention. Wherein:
FIG. 1 is a schematic diagram of the main steps of a data processing method according to one embodiment of the present invention;
FIG. 2 is a flow diagram of a data processing method according to one embodiment of the invention;
FIG. 3 is a schematic diagram of the main blocks of a data processing apparatus according to one embodiment of the present invention;
FIG. 4 is an exemplary system architecture diagram in which embodiments of the present invention may be employed;
FIG. 5 is a schematic block diagram of a computer system suitable for use in implementing a terminal device or server of an embodiment of the invention.
Detailed Description
Exemplary embodiments of the present invention are described below with reference to the accompanying drawings, in which various details of embodiments of the invention are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 is a schematic diagram of main steps of a data processing method according to an embodiment of the present invention.
As shown in fig. 1, the data processing method according to an embodiment of the present invention mainly includes steps S101 to S103 as follows.
Step S101: intercept the request data and update the queue water level of the interface corresponding to the request data.
The queue records the number of pending requests of the interface, and the queue water level indicates how much of the queue's capacity those pending requests occupy.
The request data includes the name and parameter values of the interface to be used; it requests a call to the interface corresponding to that interface name, and that interface is the interface corresponding to the request data.
Before intercepting the request data and updating the queue water level of the interface corresponding to the request data, the method may include: configuring a queue for each interface so that each interface's call frequency is recorded through its queue water level; and configuring the thread pool corresponding to an interface according to one or more of the interface's category, importance, and call frequency. For example, interfaces of a particular category and/or importance and/or call frequency may use separate thread pools; the specific thread pool configuration is not limited to this example. Note that in the embodiment of the present invention the call frequency of an interface does not need to be quantified explicitly; it is reflected indirectly by the queue water level, so when the thread pool of an interface is configured according to call frequency, the configuration can refer to the queue water level.
The capacity of the queue, the first threshold, and the second threshold are set according to one or more of the interface's category, importance, and call frequency.
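A minimal configuration sketch in Java is shown below. It only illustrates the per-interface queues, thresholds and thread pools described above: the names (InterfaceConfig, InterfaceRegistry, the example interface names) and the concrete capacities, thresholds and pool sizes are illustrative assumptions rather than values from the patent, and the fixed-capacity queue of pending requests is simplified to an atomic counter that is compared against the capacity.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

// Per-interface configuration: a bounded queue (here a counter plus capacity),
// low/high water level thresholds, and a dedicated thread pool.
final class InterfaceConfig {
    final int queueCapacity;          // fixed capacity of the interface queue
    final int lowWaterLevel;          // first threshold (e.g. 20% of capacity)
    final int highWaterLevel;         // second threshold (e.g. 80% of capacity)
    final AtomicInteger pending = new AtomicInteger(); // pending requests = queue water level
    final ExecutorService threadPool; // pool used when the interface is in thread pool mode

    InterfaceConfig(int queueCapacity, int lowWaterLevel, int highWaterLevel, int poolSize) {
        this.queueCapacity = queueCapacity;
        this.lowWaterLevel = lowWaterLevel;
        this.highWaterLevel = highWaterLevel;
        this.threadPool = Executors.newFixedThreadPool(poolSize);
    }
}

final class InterfaceRegistry {
    static final Map<String, InterfaceConfig> CONFIGS = new ConcurrentHashMap<>();

    static {
        // An important, frequently called interface: larger queue and its own larger pool.
        CONFIGS.put("OrderService.createOrder", new InterfaceConfig(100, 20, 80, 16));
        // A minor interface: smaller queue and a small pool, isolated from the one above.
        CONFIGS.put("ReportService.export", new InterfaceConfig(20, 4, 16, 2));
    }
}
```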
Step S102: compare the queue water level of the interface with the preset first threshold and the preset second threshold, and determine the processing mode of the request data according to the comparison result.
Comparing the queue water level of the interface with a preset first threshold and a preset second threshold and determining the processing mode of the request data according to the comparison result may include: if the queue water level of the interface is less than or equal to the first threshold, the processing mode of the request data is the normal mode; if the queue water level of the interface is greater than or equal to the second threshold, the processing mode of the request data is the thread pool mode; and if the queue water level of the interface is greater than the first threshold and less than the second threshold, the current processing mode of the request data is kept, where the current processing mode is the normal mode or the thread pool mode.
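The threshold comparison of step S102 can be sketched as below; the names Mode and ModeDecider are illustrative. The method simply encodes the three rules above and keeps the current mode while the water level sits between the two thresholds.

```java
enum Mode { NORMAL, THREAD_POOL }

final class ModeDecider {
    // Returns the processing mode for the next request of one interface.
    // Between the two thresholds the current mode is kept, so the system does not
    // flap between normal and thread pool mode around a single threshold.
    static Mode decideMode(int queueWaterLevel, int lowThreshold, int highThreshold, Mode currentMode) {
        if (queueWaterLevel <= lowThreshold) {
            return Mode.NORMAL;       // low pressure: scale in, call locally and synchronously
        }
        if (queueWaterLevel >= highThreshold) {
            return Mode.THREAD_POOL;  // high pressure: scale out, hand the call to the thread pool
        }
        return currentMode;           // between the thresholds: keep the current mode
    }
}
```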
Step S103: the request data is processed based on the processing mode of the request data.
Processing the request data based on its processing mode may include: processing the request data through the thread pool of the interface when the processing mode is the thread pool mode; and, when the processing mode is the normal mode, calling the interface directly and synchronously in the AOP (aspect-oriented programming) aspect so that the request data is processed by a local system thread. The thread pool of the interface is used to process the request data that calls the interface and can be configured according to the interface's importance, category, and so on.
Processing the request data through the thread pool of the interface may include: sending the interface name and parameter values of the request data to the thread pool; and obtaining the request result of the request data through the thread pool based on the Java reflection mechanism, that is, after a thread in the pool obtains the method name and parameter values of the interface, it calls the corresponding method through Java reflection to obtain the return value.
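The reflection-based call in thread pool mode can be sketched as follows; submitToPool and target are illustrative names, and deriving the parameter types from the runtime argument classes is a simplification (in practice the exact parameter types would accompany the request).

```java
import java.lang.reflect.Method;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;

final class ThreadPoolInvoker {
    // Hands the interface (method) name and parameter values to the interface's pool;
    // a worker thread resolves and invokes the method through Java reflection and
    // returns its return value as the request result.
    static Future<Object> submitToPool(ExecutorService pool, Object target,
                                       String methodName, Object... args) {
        return pool.submit(() -> {
            Class<?>[] paramTypes = new Class<?>[args.length];
            for (int i = 0; i < args.length; i++) {
                paramTypes[i] = args[i].getClass();
            }
            Method method = target.getClass().getMethod(methodName, paramTypes);
            return method.invoke(target, args);
        });
    }
}
```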
After the processing of the request data is completed, the queue water level of the interface corresponding to the request data may be updated.
Fig. 2 is a flow diagram of a data processing method according to an embodiment of the invention.
As shown in fig. 2, the architecture on which the data processing method of the embodiment of the present invention is based mainly includes an AOP aspect layer, interface queues (queues for short), thread pools, and a system elastic policy component. The data processing flow comprises the following steps. Step 1: an external request calls an interface, and the AOP aspect layer intercepts the request data of the interface. Step 2: the interface parameters are obtained. Step 3: a record of the request is written into the corresponding interface queue. Step 4: the system elastic policy component obtains the returned queue water level from the interface queue and performs the capacity expansion or reduction switching control, where: if the queue water level is less than or equal to the low water level threshold, step 5.1 is executed, that is, the low water level threshold is triggered and the call is executed locally and synchronously, followed by step 6.1, returning the execution result of step 5.1; if the queue water level is greater than or equal to the high water level threshold, step 5.2 is executed, that is, the high water level threshold is triggered and the request is put into the thread pool, followed by step 6.2, waiting for the execution result from the thread pool. After step 6.1 or step 6.2, step 7 is executed: once the request has been processed, its record is deleted from the corresponding interface queue. Step 8: the request result (that is, the return value) of the request data is returned.
The AOP aspect layer (AOP: aspect-oriented programming) uniformly integrates the system's interface calls; it is responsible for intercepting interface requests, obtaining data such as the interface name and parameter values of each request, and updating the queue that records the interface's calls. The interface queue stores the calling situation of each interface; a high water level threshold and a low water level threshold can be set on it, and the number of pending requests of the interface can be recorded through it. The thread pool provides the carrier for concurrently executing calls to multiple interfaces when the system expands capacity. The system elastic policy component is the switching control component for system capacity expansion or reduction: it is responsible for expanding and reducing the system's capacity, using the thread pool to process requests when capacity is expanded and local threads when capacity is reduced. AOP is a technique that implements program functions through pre-compilation and runtime dynamic proxies; it can isolate the parts of the business logic from one another, which reduces the coupling between them, improves the reusability of the program, and increases development efficiency. A thread pool is a form of multithreaded processing: it is a set of pre-created threads that can be reused; during processing, tasks are first placed in a queue, and the threads in the pool then automatically take tasks from the queue and process them.
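The flow of fig. 2 can be sketched as an aspect, for example with Spring AOP; Spring, the pointcut expression, the RequestGate name and the service package are assumptions, since the patent only specifies an AOP aspect layer, and the interface queue is again simplified to the pending-request counter of the earlier configuration sketch.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

@Aspect
public class RequestGate {
    // Current processing mode per interface, reusing Mode and ModeDecider from the sketches above.
    private final Map<String, Mode> modes = new ConcurrentHashMap<>();

    @Around("execution(* com.example.service..*(..))")              // step 1: intercept the interface call
    public Object around(ProceedingJoinPoint pjp) throws Throwable {
        String interfaceName = pjp.getSignature().getDeclaringType().getSimpleName()
                + "." + pjp.getSignature().getName();               // step 2: obtain the interface name
        InterfaceConfig cfg = InterfaceRegistry.CONFIGS.get(interfaceName);
        if (cfg == null) {
            return pjp.proceed();                                   // unmanaged interface: call through as-is
        }
        int waterLevel = cfg.pending.incrementAndGet();             // step 3: record the request in its queue
        try {
            Mode current = modes.getOrDefault(interfaceName, Mode.NORMAL);
            Mode next = ModeDecider.decideMode(waterLevel, cfg.lowWaterLevel,
                    cfg.highWaterLevel, current);                   // step 4: capacity switch control
            modes.put(interfaceName, next);
            if (next == Mode.THREAD_POOL) {
                // steps 5.2 / 6.2: put the request into the interface's thread pool and wait for the result
                return cfg.threadPool.submit(() -> {
                    try {
                        return pjp.proceed();
                    } catch (Throwable t) {
                        throw new IllegalStateException(t);
                    }
                }).get();
            }
            return pjp.proceed();                                   // steps 5.1 / 6.1: local synchronous call
        } finally {
            cfg.pending.decrementAndGet();                          // step 7: delete the request's record
        }
    }
}
```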
In one embodiment, a queue is configured for each interface so that each interface's call frequency is recorded through its queue water level; the request data is intercepted and the queue water level of the interface corresponding to the request data is updated, where the queue records the number of pending requests of the interface and the queue water level indicates how much of the queue's capacity those pending requests occupy. Specifically, the embodiment of the present invention performs AOP interception on the call request of an interface, obtains the interface name (that is, the method name of the interface) and the parameter values in the request, and allocates a queue with a fixed capacity to each interface to record the number of its pending requests. After a request is intercepted, a record of the request is inserted into the corresponding interface queue, and after the request has been processed, the record is deleted from the corresponding interface queue.
In one embodiment, comparing the queue water level of the interface with a preset first threshold and a preset second threshold and determining the processing mode of the request data according to the comparison result includes: if the queue water level of the interface is less than or equal to the first threshold, the processing mode of the request data is the normal mode; if the queue water level of the interface is greater than or equal to the second threshold, the processing mode of the request data is the thread pool mode; and if the queue water level of the interface is greater than the first threshold and less than the second threshold, the current processing mode of the request data is kept, where the current processing mode is the normal mode or the thread pool mode. Specifically, a high water level threshold (the second threshold) and a low water level threshold (the first threshold) are set for each queue in advance; for example, the high water level threshold is set to 80% and the low water level threshold to 20%. When the queue water level reaches or exceeds the high water level threshold, the current interface is being called frequently and system pressure is high; the processing mode of the request data is the thread pool mode, system capacity expansion is triggered, and the thread pool is enabled. Because the thread pool can execute calls to multiple interfaces concurrently, the system's processing capacity is greatly increased while the system is protected. When the queue water level reaches or falls below the low water level threshold, the interface is being called infrequently and system pressure is low; the processing mode of the request data is the normal mode, system capacity reduction is triggered, the thread pool is exited, and normal calling is resumed, that is, the request data is called directly and synchronously in the AOP aspect and processed by a local thread. When the queue water level is greater than the low water level threshold and less than the high water level threshold, the current processing mode is kept, which avoids hard switching between the normal mode and the thread pool mode: if the queue water level gradually rises above the low water level threshold, the processing mode of the request data remains the normal mode, and if it gradually falls below the high water level threshold, the processing mode remains the thread pool mode.
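A short trace of this hysteresis, using the decideMode sketch above with a low threshold of 20 and a high threshold of 80 (the 20%/80% values of this example applied to a queue capacity of 100), might look as follows; the sequence of water levels is purely illustrative.

```java
public class HysteresisDemo {
    public static void main(String[] args) {
        Mode mode = Mode.NORMAL;
        int[] waterLevels = {10, 30, 85, 60, 25, 15};
        for (int level : waterLevels) {
            mode = ModeDecider.decideMode(level, 20, 80, mode);
            System.out.println("water level " + level + " -> " + mode);
        }
        // Prints NORMAL, NORMAL, THREAD_POOL, THREAD_POOL, THREAD_POOL, NORMAL:
        // crossing 20 upwards or 80 downwards alone does not switch the mode.
    }
}
```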
In one embodiment, processing the request data based on the processing mode of the request data includes: processing the request data through the thread pool of the interface when the processing mode is the thread pool mode; and processing the request data through the local system when the processing mode is the normal mode. Specifically, the interface method name (that is, the interface name) and the parameter values acquired by the AOP aspect are sent to the thread pool as the request data, and a thread in the pool calls the corresponding method through the Java reflection mechanism based on that method name and those parameter values to obtain the return value (that is, the request result). A timeout for the thread pool to process the request data may be set; if the timeout is exceeded, the processing of the request data is deemed to have failed, which prevents the thread pool from being occupied for a long time.
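The timeout can be sketched with Future.get as below, reusing the submitToPool helper from the earlier sketch; the 500 ms limit and the TimedInvoker name are illustrative assumptions.

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

final class TimedInvoker {
    // Waits for the thread pool result up to a configured timeout; on timeout the task is
    // cancelled so the worker thread is not occupied for a long time, and the request is
    // treated as failed.
    static Object invokeWithTimeout(ExecutorService pool, Object target,
                                    String methodName, Object... args) throws Exception {
        Future<Object> future = ThreadPoolInvoker.submitToPool(pool, target, methodName, args);
        try {
            return future.get(500, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            future.cancel(true);
            throw new IllegalStateException("request processing failed: timeout", e);
        } catch (ExecutionException e) {
            throw new IllegalStateException("request processing failed", e.getCause());
        }
    }
}
```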
In one embodiment, the thread pool corresponding to an interface is configured, and the capacity of the queue, the first threshold, and the second threshold are set, according to one or more of the interface's category, importance, and call frequency. Specifically, interfaces may be classified, for example, by call volume or by importance. Different interfaces may be configured with different queue capacities, low water level thresholds, and high water level thresholds. Using different thread pools at the same time allows independent configuration and resource isolation for different interfaces: a relatively important interface can use a dedicated thread pool while relatively minor interfaces use other pools, so that pressure on a minor interface's thread pool does not affect the important one.
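Resource isolation by importance can be sketched as follows; the pool sizes, the PaymentService prefix and the PoolAssignment name are illustrative assumptions. Because the pools are separate executors, exhausting the shared pool of minor interfaces leaves the dedicated pool untouched.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

final class PoolAssignment {
    // Dedicated pool for an important interface group: pressure elsewhere cannot exhaust these threads.
    static final ExecutorService PAYMENT_POOL = Executors.newFixedThreadPool(32);
    // Shared pool for minor interfaces: if it saturates, the dedicated pool is unaffected.
    static final ExecutorService MINOR_POOL = Executors.newFixedThreadPool(4);

    static ExecutorService poolFor(String interfaceName) {
        return interfaceName.startsWith("PaymentService.") ? PAYMENT_POOL : MINOR_POOL;
    }
}
```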
Fig. 3 is a schematic diagram of main blocks of a data processing apparatus according to an embodiment of the present invention.
As shown in fig. 3, a data processing apparatus 300 according to an embodiment of the present invention mainly includes: a request data interception module 301, a processing mode determination module 302 and a request data processing module 303.
The request data interception module 301 is configured to intercept request data and update the queue water level of the interface corresponding to the request data, where the queue records the number of pending requests of the interface and the queue water level indicates how much of the queue's capacity those pending requests occupy.
The processing mode determination module 302 is configured to compare the queue water level of the interface with a preset first threshold and a preset second threshold and determine the processing mode of the request data according to the comparison result.
The request data processing module 303 is configured to process the request data based on the processing mode of the request data.
In one embodiment, the request data may include the name and parameter values of the interface to be used, and the apparatus may further include a configuration module for: configuring a queue for each interface so that each interface's call frequency is recorded through its queue water level; and configuring the thread pool corresponding to an interface according to one or more of the interface's category, importance, and call frequency.
In one embodiment, the processing mode determination module 302 is specifically configured to: if the queue water level of the interface is less than or equal to the first threshold, set the processing mode of the request data to the normal mode; if the queue water level of the interface is greater than or equal to the second threshold, set the processing mode of the request data to the thread pool mode; and if the queue water level of the interface is greater than the first threshold and less than the second threshold, keep the current processing mode of the request data, where the current processing mode is the normal mode or the thread pool mode.
In one embodiment, the request data processing module 303 is specifically configured to: process the request data through the thread pool of the interface when the processing mode is the thread pool mode; and process the request data through the local system when the processing mode is the normal mode.
In one embodiment, the request data processing module 303 is specifically configured to: send the interface name and parameter values of the request data to the thread pool; and obtain the request result of the request data through the thread pool based on the Java reflection mechanism.
In one embodiment, the capacity of the queue, the first threshold, and the second threshold may be set according to one or more of the interface's category, importance, and call frequency.
In one embodiment, after processing of the request data is completed, the queue water level of the interface corresponding to the request data is updated.
In addition, the detailed implementation of the data processing apparatus in the embodiment of the present invention has been described in detail in the above data processing method, and therefore, the repeated content will not be described again.
Fig. 4 shows an exemplary system architecture 400 of a data processing method or data processing apparatus to which embodiments of the present invention may be applied.
As shown in fig. 4, the system architecture 400 may include terminal devices 401, 402, 403, a network 404, and a server 405. The network 404 serves as a medium for providing communication links between the terminal devices 401, 402, 403 and the server 405. Network 404 may include various types of connections, such as wire, wireless communication links, or fiber optic cables, to name a few.
A user may use the terminal devices 401, 402, 403 to interact with the server 405 over the network 404 to receive or send messages and the like. Various communication client applications may be installed on the terminal devices 401, 402, 403, such as shopping applications, web browser applications, search applications, instant messaging tools, mailbox clients, and social platform software (by way of example only).
The terminal devices 401, 402, 403 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 405 may be a server providing various services, for example a background management server (by way of example only) that supports shopping websites browsed by users with the terminal devices 401, 402, 403. The background management server may analyze and otherwise process received data such as a product information query request, and feed back the processing result (for example, target push information or product information, by way of example only) to the terminal device.
It should be noted that the data processing method provided by the embodiment of the present invention is generally executed by the server 405, and accordingly, the data processing apparatus is generally disposed in the server 405.
It should be understood that the number of terminal devices, networks, and servers in fig. 4 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to FIG. 5, a block diagram of a computer system 500 suitable for use with a terminal device or server implementing an embodiment of the invention is shown. The terminal device or the server shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 5, the computer system 500 includes a Central Processing Unit (CPU)501 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)502 or a program loaded from a storage section 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data necessary for the operation of the system 500 are also stored. The CPU 501, ROM 502, and RAM 503 are connected to each other via a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
The following components are connected to the I/O interface 505: an input section 506 including a keyboard, a mouse, and the like; an output section 507 including a display such as a cathode ray tube (CRT) or a liquid crystal display (LCD), and a speaker; a storage section 508 including a hard disk and the like; and a communication section 509 including a network interface card such as a LAN card or a modem. The communication section 509 performs communication processing via a network such as the Internet. A drive 510 is also connected to the I/O interface 505 as necessary. A removable medium 511, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 510 as necessary, so that a computer program read from it can be installed into the storage section 508 as needed.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 509, and/or installed from the removable medium 511. The computer program performs the above-described functions defined in the system of the present invention when executed by the Central Processing Unit (CPU) 501.
It should be noted that the computer readable medium shown in the present invention can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present invention may be implemented by software or hardware. The described modules may also be provided in a processor, which may be described as: a processor comprises a request data interception module, a processing mode determination module and a request data processing module. The names of these modules do not constitute a limitation to the modules themselves in some cases, for example, the request data interception module may also be described as a "module for intercepting request data and updating the queue level of the interface corresponding to the request data".
As another aspect, the present invention also provides a computer-readable medium, which may be included in the apparatus described in the above embodiments or may exist separately without being assembled into the apparatus. The computer-readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: intercept request data and update the queue water level of the interface corresponding to the request data, where the queue records the number of pending requests of the interface and the queue water level indicates how much of the queue's capacity those pending requests occupy; compare the queue water level of the interface with a preset first threshold and a preset second threshold, and determine the processing mode of the request data according to the comparison result; and process the request data based on that processing mode.
According to the technical solution of the embodiments of the present invention, request data is intercepted and the queue water level of the interface corresponding to the request data is updated, where the queue records the number of pending requests of the interface and the queue water level indicates how much of the queue's capacity those pending requests occupy; the queue water level of the interface is compared with a preset first threshold and a preset second threshold, and the processing mode of the request data is determined according to the comparison result; and the request data is processed based on that processing mode. The system can thus be protected without sacrificing its processing capacity, the architecture is simple and easy to build, interference between interfaces is reduced, and system stability is improved.
The above-described embodiments should not be construed as limiting the scope of the invention. Those skilled in the art will appreciate that various modifications, combinations, sub-combinations, and substitutions can occur, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (16)

1. A data processing method, comprising:
intercepting request data, and updating a queue water level of an interface corresponding to the request data; wherein the queue is used for recording the number of pending requests of the interface, and the queue water level is used for indicating how much of the queue's capacity those pending requests occupy;
comparing the queue water level of the interface with a preset first threshold and a preset second threshold, and determining the processing mode of the request data according to the comparison result;
and processing the request data based on the processing mode of the request data.
2. The method of claim 1, wherein the request data includes a name and parameter values of an interface to be used, and before intercepting the request data and updating the queue water level of the interface corresponding to the request data, the method comprises:
configuring a queue for each interface so that the calling frequency of each interface is recorded through its queue water level;
and configuring a thread pool corresponding to the interface according to one or more of the category, importance, and calling frequency of the interface.
3. The method according to claim 2, wherein comparing the queue water level of the interface with a preset first threshold and a preset second threshold and determining the processing mode of the request data according to the comparison result comprises:
if the queue water level of the interface is less than or equal to the first threshold, the processing mode of the request data is a normal mode;
if the queue water level of the interface is greater than or equal to the second threshold, the processing mode of the request data is a thread pool mode;
and if the queue water level of the interface is greater than the first threshold and less than the second threshold, keeping the current processing mode of the request data unchanged, wherein the current processing mode is the normal mode or the thread pool mode.
4. The method of claim 3, wherein the processing the request data based on the processing mode of the request data comprises:
processing the request data through the thread pool of the interface under the condition that the processing mode of the request data is a thread pool mode;
and processing the request data through a local system under the condition that the processing mode of the request data is a normal mode.
5. The method of claim 4, wherein processing the request data through the thread pool of the interface comprises:
sending the interface name and the parameter value of the request data to the thread pool;
and obtaining a request result of the request data through the thread pool based on a reflection mechanism of Java.
6. The method of claim 1, wherein the capacity of the queue, the first threshold, and the second threshold are set according to one or more of a category, a degree of importance, and a frequency of invocation of the interface.
7. The method of claim 1, wherein a queue level of an interface corresponding to the request data is updated after processing of the request data is completed.
8. A data processing apparatus, comprising:
the request data interception module is used for intercepting request data and updating a queue water level of an interface corresponding to the request data; wherein the queue is used for recording the number of pending requests of the interface, and the queue water level is used for indicating how much of the queue's capacity those pending requests occupy;
the processing mode determining module is used for comparing the queue water level of the interface with a preset first threshold and a preset second threshold and determining the processing mode of the request data according to the comparison result;
and the request data processing module is used for processing the request data based on the processing mode of the request data.
9. The apparatus of claim 8, wherein the request data comprises a name of an interface to be used and a parameter value, and further comprising a configuration module configured to:
respectively configuring queues of each interface to record the calling frequency of each interface through the queue water level;
and configuring a thread pool corresponding to the interface according to one or more of the category, the importance degree and the calling frequency of the interface.
10. The apparatus of claim 9, wherein the processing mode determination module is further configured to:
if the queue water level of the interface is less than or equal to the first threshold, set the processing mode of the request data to a normal mode;
if the queue water level of the interface is greater than or equal to the second threshold, set the processing mode of the request data to a thread pool mode;
and if the queue water level of the interface is greater than the first threshold and less than the second threshold, keep the current processing mode of the request data unchanged, wherein the current processing mode is the normal mode or the thread pool mode.
11. The apparatus of claim 10, wherein the request data processing module is further configured to:
processing the request data through the thread pool of the interface under the condition that the processing mode of the request data is a thread pool mode;
and processing the request data through a local system under the condition that the processing mode of the request data is a normal mode.
12. The apparatus of claim 11, wherein the request data processing module is further configured to:
sending the interface name and the parameter value of the request data to the thread pool;
and obtaining a request result of the request data through the thread pool based on a reflection mechanism of Java.
13. The apparatus of claim 8, wherein the capacity of the queue, the first threshold, and the second threshold are set according to one or more of a category, a degree of importance, and a frequency of invocation of the interface.
14. The apparatus of claim 8, wherein a queue level of an interface corresponding to the request data is updated after processing of the request data is completed.
15. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
which, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-7.
16. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-7.
CN202210005424.1A 2022-01-04 2022-01-04 Data processing method and device Pending CN114374657A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210005424.1A CN114374657A (en) 2022-01-04 2022-01-04 Data processing method and device

Publications (1)

Publication Number Publication Date
CN114374657A true CN114374657A (en) 2022-04-19

Family

ID=81142812

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210005424.1A Pending CN114374657A (en) 2022-01-04 2022-01-04 Data processing method and device

Country Status (1)

Country Link
CN (1) CN114374657A (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030233485A1 (en) * 2002-06-13 2003-12-18 Mircrosoft Corporation Event queue
US20040010551A1 (en) * 2002-07-11 2004-01-15 Chia-Chu Dorland Method and apparatus for automated network polling
US20100082856A1 (en) * 2008-06-11 2010-04-01 Kimoto Christian A Managing Command Request Time-outs In QOS Priority Queues
US8769550B1 (en) * 2012-10-24 2014-07-01 Sprint Communications Company L.P. Reply queue management
CN107341050A (en) * 2016-04-28 2017-11-10 北京京东尚科信息技术有限公司 Service processing method and device based on dynamic thread pool
CN107220033A (en) * 2017-07-05 2017-09-29 百度在线网络技术(北京)有限公司 Method and apparatus for controlling thread pool thread quantity
CN109450803A (en) * 2018-09-11 2019-03-08 广东神马搜索科技有限公司 Traffic scheduling method, device and system
US20200310869A1 (en) * 2019-03-28 2020-10-01 Fujitsu Limited Information processing apparatus and storage medium storing execution control program
CN112559173A (en) * 2020-12-07 2021-03-26 北京知道创宇信息技术股份有限公司 Resource adjusting method and device, electronic equipment and readable storage medium
CN113467933A (en) * 2021-06-15 2021-10-01 济南浪潮数据技术有限公司 Thread pool optimization method, system, terminal and storage medium for distributed file system
CN113360266A (en) * 2021-06-23 2021-09-07 北京百度网讯科技有限公司 Task processing method and device

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115225577A (en) * 2022-09-20 2022-10-21 深圳市明源云科技有限公司 Data processing control method and device, electronic equipment and readable storage medium

Similar Documents

Publication Publication Date Title
CN112650576B (en) Resource scheduling method, device, equipment, storage medium and computer program product
CN113517985B (en) File data processing method and device, electronic equipment and computer readable medium
CN110851276A (en) Service request processing method, device, server and storage medium
CN113765818A (en) Distributed current limiting method, device, equipment, storage medium and system
CN111200606A (en) Deep learning model task processing method, system, server and storage medium
CN109428926B (en) Method and device for scheduling task nodes
CN113238861A (en) Task execution method and device
CN115904761A (en) System on chip, vehicle and video processing unit virtualization method
CN112835632A (en) Method and device for calling end capability and computer storage medium
CN111290842A (en) Task execution method and device
CN114374657A (en) Data processing method and device
CN113742389A (en) Service processing method and device
CN116541167A (en) System flow control method, device, electronic equipment and computer readable medium
CN113360815A (en) Request retry method and device
CN109284177B (en) Data updating method and device
CN115525411A (en) Method, device, electronic equipment and computer readable medium for processing service request
CN114327404A (en) File processing method and device, electronic equipment and computer readable medium
CN113626176A (en) Service request processing method and device
CN113886082A (en) Request processing method and device, computing equipment and medium
CN113726885A (en) Method and device for adjusting flow quota
CN113779122A (en) Method and apparatus for exporting data
CN109120692B (en) Method and apparatus for processing requests
CN113760487A (en) Service processing method and device
CN111930696A (en) File transmission processing method and system based on small program
CN113765871A (en) Fortress management method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination