CN115277859B - Request scheduling method, device, equipment and medium - Google Patents


Info

Publication number
CN115277859B
Authority
CN
China
Prior art keywords: client, queue, request, array, target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211189676.0A
Other languages
Chinese (zh)
Other versions
CN115277859A (en)
Inventor
李秋军
阳小鲜
贺锟
张旭敏
肖海宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huarui Distributed Beijing Technology Co ltd
Shenzhen Huarui Distributed Technology Co ltd
Original Assignee
Huarui Distributed Beijing Technology Co ltd
Shenzhen Huarui Distributed Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huarui Distributed Beijing Technology Co ltd and Shenzhen Huarui Distributed Technology Co ltd
Priority to CN202211189676.0A
Publication of CN115277859A
Application granted
Publication of CN115277859B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/50 Queue scheduling
    • H04L 47/62 Queue scheduling characterised by scheduling criteria
    • H04L 47/622 Queue service order
    • H04L 47/6225 Fixed service order, e.g. Round Robin

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention relates to the technical field of the Internet, and provides a request scheduling method, apparatus, device and medium. The method comprises the following steps: when request scheduling is performed, sequentially traversing each client in a client array; when the current client is traversed, polling each element in the global array corresponding to the current client until an element whose index queue is non-empty is polled, determining the request queue pointed to by the target element at the head of that index queue as the target queue, stopping polling for the current client, acquiring requests from the target queue, adding them to a queue to be executed, and continuing to traverse the next client, while concurrently executing each request in the queue to be executed during the traversal. The invention performs traversal polling on requests based on the constructed client array, request queues and global array, so that the server treats different request types without differentiation when scheduling requests, achieving higher request concurrency, request execution throughput and processing efficiency.

Description

Request scheduling method, device, equipment and medium
Technical Field
The present invention relates to the field of internet technologies, and in particular, to a request scheduling method, apparatus, device, and medium.
Background
In existing industries such as securities, when multiple clients simultaneously send a large number of query and playback requests to a server, different playback and query requests place different demands on resource occupation and concurrency capacity. Constrained by factors such as each server's own performance, the server cannot effectively and promptly balance the resources and concurrency performance across the received requests.
Therefore, the concurrency, throughput and processing efficiency of the server when scheduling different types of requests of the client are still to be further improved.
Disclosure of Invention
In view of the foregoing, it is necessary to provide a request scheduling method, apparatus, device and medium, which are intended to solve the problems of low concurrency, low throughput and low processing efficiency when a server schedules a request.
A request scheduling method is applied to a server side and comprises the following steps:
constructing a client array, and constructing a request queue with different weight values for each client in the client array;
constructing a global array corresponding to each client, wherein the size of the global array corresponding to each client is 5, each element in the global array corresponding to each client corresponds to an index queue, and the element in each index queue corresponds to the weight value of the request queue;
when the request scheduling is carried out, sequentially traversing each client in the client array;
in the traversing process, determining the client traversed this time as a current client; when traversing the current client, continuing to poll each element in the global array corresponding to the current client from the last polled element until the index queue corresponding to an element is not empty; determining the non-empty index queue as a target index queue; acquiring the element at the head of the target index queue as a target element; determining the request queue pointed to by the target element as a target queue; stopping polling the global array corresponding to the current client; acquiring a configured number of requests from the target queue and adding them to a queue to be executed; and continuing to traverse the next client adjacent to the current client in the client array;
and in the traversing process, executing each request in the queue to be executed concurrently.
According to the preferred embodiment of the present invention, the constructing the client array comprises:
when detecting that a client accesses the server, acquiring a client identifier of each accessed client;
and determining the client identifier of each client as an element, and forming an array according to the access sequence of each client to obtain the client array.
According to the preferred embodiment of the present invention, after the client array is constructed, the method further comprises:
when detecting that a client is disconnected from the server, determining the disconnected client as a target client;
deleting the client identifier of the target client from the client array, sequentially filling the client identifiers of other clients behind the target client in the client array forwards, and performing zero filling on the last element in the client array;
when detecting that a new client is accessed to the server, acquiring a client identifier of the new client;
and replacing the element value of the last element by using the acquired client identification.
According to a preferred embodiment of the present invention, the constructing a request queue with different weight values for each client in the client array includes:
adding a request with the weight value of 1 to the queue to obtain a first sub-queue of each client;
adding the request with the weight value of 2 to the queue to obtain a second sub-queue of each client;
adding the request with the weight value of 3 to the queue to obtain a third sub-queue of each client;
adding the request with the weight value of 5 to the queue to obtain a fourth sub-queue of each client;
and determining a first sub-queue, a second sub-queue, a third sub-queue and a fourth sub-queue corresponding to each client as a request queue corresponding to each client.
According to the preferred embodiment of the present invention, after obtaining the configured number of requests from the target queue and adding the requests to the queue to be executed, the method further includes:
acquiring an array subscript of the target index queue;
acquiring a weight value of the target queue;
calculating the sum of the array subscript and the weight value to obtain a first numerical value;
performing a modulo-5 operation on the first numerical value to obtain a current array subscript;
acquiring an index queue corresponding to the subscript of the current array as a current index queue;
and adding the weighted value of the target queue as an element to the tail of the current index queue, and deleting the element corresponding to the weighted value of the target queue from the target index queue.
According to the preferred embodiment of the present invention, when a configured number of requests are obtained from the target queue and added to a queue to be executed, the method further includes:
when the number of the requests in the target queue is smaller than the configuration number, all the requests in the target queue are added to the queue to be executed.
According to a preferred embodiment of the present invention, said concurrently executing each request in the queue to be executed comprises:
constructing threads with a configured quantity, and concurrently executing each request in the queue to be executed by utilizing the threads;
the configuration number is 64, and the size of the queue to be executed is 3 times the number of concurrent requests per second.
A request scheduling apparatus, operating on a server, the request scheduling apparatus comprising:
the system comprises a construction unit, a data processing unit and a data processing unit, wherein the construction unit is used for constructing a client array and constructing a request queue with different weight values for each client in the client array;
the building unit is further configured to build a global array corresponding to each client, where the size of the global array corresponding to each client is 5, each element in the global array corresponding to each client corresponds to one index queue, and an element in each index queue corresponds to a weight value of the request queue;
the traversing unit is used for sequentially traversing each client in the client array when the request scheduling is carried out;
the traversing unit is further configured to determine a client traversed this time as a current client, and when traversing the current client, continue to poll each element in the global array corresponding to the current client from an element polled last time until an index queue corresponding to an element is not empty, determine an index queue that is not empty as a target index queue, acquire an element at the head of a queue in the target index queue as a target element, determine a request queue pointed by the target element as a target queue, stop polling the global array corresponding to the current client, acquire a configuration number of requests from the target queue, add the requests to a queue to be executed, and continue to traverse a next client adjacent to the current client in the client array;
and the execution unit is used for concurrently executing each request in the queue to be executed in the traversal process.
A computer device, the computer device comprising:
a memory storing at least one instruction; and
a processor executing instructions stored in the memory to implement the request scheduling method.
A computer-readable storage medium having stored therein at least one instruction for execution by a processor in a computer device to implement the request scheduling method.
According to the technical scheme, requests can be traversal-polled on the basis of the constructed client array, request queues and global array. The server therefore does not need to distinguish query requests from playback requests when scheduling: different request types are treated without differentiation, no balancing across different types of system resources needs to be considered, and resource balancing happens automatically. This yields higher client request concurrency, request execution throughput and processing efficiency, and indirectly improves the client's concurrency experience.
Drawings
FIG. 1 is a flow chart of a preferred embodiment of a request scheduling method according to the present invention.
FIG. 2 is an exemplary diagram of a global array of the present invention.
Fig. 3 is an exemplary schematic diagram of a new global array obtained after the global array in fig. 2 is subjected to one-time scheduling.
Fig. 4 is a functional block diagram of a preferred embodiment of the request scheduling apparatus of the present invention.
FIG. 5 is a schematic structural diagram of a computer device according to a preferred embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in detail with reference to the accompanying drawings and specific embodiments.
Fig. 1 is a flow chart of a request scheduling method according to a preferred embodiment of the present invention. The order of the steps in the flow chart may be changed and some steps may be omitted according to different needs.
The request scheduling method is applied to one or more computer devices, where a computer device is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like.
The computer device may be any electronic product capable of human-computer interaction with a user, for example, a personal computer, a tablet computer, a smartphone, a Personal Digital Assistant (PDA), a game machine, an Internet Protocol Television (IPTV), an intelligent wearable device, and the like.
The computer device may also include a network device and/or a user device. Wherein the network device includes, but is not limited to, a single network server, a server group consisting of a plurality of network servers, or a Cloud Computing (Cloud Computing) based Cloud consisting of a large number of hosts or network servers.
The server may be an independent server, or may be a cloud server that provides basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a web service, cloud communication, a middleware service, a domain name service, a security service, a Content Delivery Network (CDN), and a big data and artificial intelligence platform.
Artificial Intelligence (AI) is a theory, method, technique and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain the best results.
The artificial intelligence base technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a robot technology, a biological recognition technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and the like.
The Network in which the computer device is located includes, but is not limited to, the internet, a wide area Network, a metropolitan area Network, a local area Network, a Virtual Private Network (VPN), and the like.
In this embodiment, the request scheduling method is applied to a server, and includes:
s10, constructing a client array, and constructing a request queue with different weight values for each client in the client array.
In this embodiment, the constructing the client array includes:
when detecting that a client accesses the server, acquiring a client identifier of each accessed client;
and determining the client identifier of each client as an element, and forming an array according to the access sequence of each client to obtain the client array.
For example: the client array may be represented as [1,2,3] when the client identification of client A is 1, the client identification of client B is 2, and the client identification of client C is 3.
In this embodiment, after the client array is constructed, the method further includes:
when detecting that a client is disconnected from the server, determining the disconnected client as a target client;
deleting the client identifier of the target client from the client array, sequentially filling the client identifiers of other clients behind the target client in the client array forwards, and performing zero filling processing on the last element in the client array;
when detecting that a new client is accessed to the server, acquiring a client identifier of the new client;
and replacing the element value of the last element by the acquired client identification.
For example: for the client array [1, 2, 3], when the client corresponding to element 2 is disconnected from the server, element 2 is deleted from the array, element 3 is shifted forward, and the last element is zero-filled, yielding [1, 3, 0]; when a new client D with client identifier 4 accesses the server, element 0 in [1, 3, 0] is replaced with element 4, yielding the new client array [1, 3, 4].
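The array maintenance above can be sketched in a few lines. This is a minimal illustration under assumed data shapes; the class and method names (`ClientArray`, `on_connect`, `on_disconnect`) are hypothetical and not from the patent.

```python
class ClientArray:
    """Client identifiers kept in access order; freed slots are zero-filled."""

    def __init__(self):
        self.slots = []

    def on_connect(self, client_id):
        # A newly accessed client reuses the first zero-filled slot if one
        # exists, otherwise it is appended in access order.
        for i, value in enumerate(self.slots):
            if value == 0:
                self.slots[i] = client_id
                return
        self.slots.append(client_id)

    def on_disconnect(self, client_id):
        # Delete the identifier, which shifts the identifiers behind it
        # forward, then zero-fill the last element.
        self.slots.remove(client_id)
        self.slots.append(0)
```

With this sketch, connecting clients 1, 2 and 3, disconnecting 2, then connecting 4 reproduces the [1, 2, 3], [1, 3, 0], [1, 3, 4] transitions of the example.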
In this embodiment, the constructing a request queue with a different weight value for each client in the client array includes:
adding a request with the weight value of 1 to the queue to obtain a first sub-queue of each client;
adding the request with the weight value of 2 to the queue to obtain a second sub-queue of each client;
adding the request with the weight value of 3 to the queue to obtain a third sub-queue of each client;
adding the request with the weight value of 5 to the queue to obtain a fourth sub-queue of each client;
and determining a first sub-queue, a second sub-queue, a third sub-queue and a fourth sub-queue corresponding to each client as a request queue corresponding to each client.
The queue sizes of the first, second, third and fourth sub-queues of each client may be configured according to actual requirements, the operating environment, and so on. For example, each of the four sub-queues may be sized at 1000, meaning that each sub-queue of a client can store at most 1000 requests.
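As a sketch, the four weighted sub-queues of a client can be modelled as bounded FIFOs keyed by weight. The function names below are illustrative; the 1000-request capacity is the example size from the text.

```python
from collections import deque

WEIGHTS = (1, 2, 3, 5)        # weight values of the four sub-queues
QUEUE_CAPACITY = 1000         # example maximum size from the text

def make_request_queues():
    """One FIFO request queue per weight value, for a single client."""
    return {w: deque() for w in WEIGHTS}

def enqueue(queues, weight, request):
    """Insert a request into the sub-queue matching its weight; reject when full."""
    q = queues[weight]
    if len(q) >= QUEUE_CAPACITY:
        return False              # queue full; caller may retry or reject
    q.append(request)
    return True
```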
S11, constructing a global array corresponding to each client, wherein the size of the global array corresponding to each client is 5, each element in the global array corresponding to each client corresponds to one index queue, and the element in each index queue corresponds to the weight value of the request queue.
FIG. 2 is a schematic diagram illustrating an example of a global array according to the present invention. In the figure, the number at the start of each arrow represents an element of the global array, the arrow points to the index queue corresponding to that element, and the numbers in each index queue represent the weight values of the corresponding request queues.
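One possible initialization of a client's global array is sketched below: five slots, each holding an index queue of weight values. Queuing all four weights at slot 0 initially is an assumption made for illustration; Fig. 2 itself is not reproduced here.

```python
from collections import deque

def make_global_array():
    """Five index-queue slots; weights 1, 2, 3, 5 start at slot 0 (assumed)."""
    slots = [deque() for _ in range(5)]
    for weight in (1, 2, 3, 5):
        slots[0].append(weight)
    return slots
```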
And S12, traversing each client in the client array in sequence when the request scheduling is carried out.
For example: for the client side arrays [1,2,3], the client side corresponding to the element 1 is traversed firstly, then the client side corresponding to the element 2 is traversed, and finally the client side corresponding to the element 3 is traversed.
And S13, in the traversing process, determining the client traversed this time as the current client; when traversing the current client, continuing to poll each element in the global array corresponding to the current client from the last polled element until the index queue corresponding to an element is not empty; determining the non-empty index queue as the target index queue; acquiring the element at the head of the target index queue as the target element; determining the request queue pointed to by the target element as the target queue; stopping polling the global array corresponding to the current client; acquiring a configured number of requests from the target queue and adding them to the queue to be executed; and continuing to traverse the next client adjacent to the current client in the client array.
This embodiment continues polling the elements of the current client's global array from the last polled element, which avoids the efficiency loss of restarting the poll from the first element every time.
In addition, this embodiment keeps traversing until the server exits, so client polling runs continuously. Different request types are thus effectively treated without differentiation in real time (for example, query and playback request types are not distinguished, but are simply inserted into the corresponding request queues by weight), no balancing across different types of system resources needs to be considered, and resource balancing happens automatically. This yields higher client request concurrency, request execution throughput and processing efficiency, and indirectly improves the client's concurrency experience.
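A minimal sketch of one scheduling pass for a single client (step S13), under assumed data shapes: `global_array` is the list of five index queues, `request_queues` maps each weight to a deque of requests, and `BATCH` is the configured number (50 in a later example). All names are illustrative, not from the patent.

```python
from collections import deque

BATCH = 50  # example configured number of requests taken per pass

def schedule_one(client, to_execute):
    """Poll the client's global array from the last-polled slot; drain up to
    BATCH requests from the target queue into to_execute."""
    slots = client["global_array"]
    queues = client["request_queues"]
    start = client.get("last_polled", 0)
    for step in range(len(slots)):
        i = (start + step) % len(slots)
        if slots[i]:                      # first non-empty index queue wins
            client["last_polled"] = i
            weight = slots[i][0]          # target element at the queue head
            target = queues[weight]       # request queue it points to
            # Take up to BATCH requests; take all if fewer are queued.
            for _ in range(min(BATCH, len(target))):
                to_execute.append(target.popleft())
            return i, weight
    return None                            # every index queue was empty
```

After a pass returns, the scheduler moves on to the next client in the client array, so no single client monopolizes the server.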
In this embodiment, after obtaining the configured number of requests from the target queue and adding the requests to the queue to be executed, the method further includes:
acquiring an array subscript of the target index queue;
acquiring a weight value of the target queue;
calculating the sum of the array subscript and the weight value to obtain a first numerical value;
performing a modulo-5 operation on the first numerical value to obtain the current array subscript;
acquiring an index queue corresponding to the current array subscript as a current index queue;
and adding the weighted value of the target queue as an element to the tail of the current index queue, and deleting the element corresponding to the weighted value of the target queue from the target index queue.
Fig. 3 is a schematic diagram illustrating an example of the new global array obtained after the global array in Fig. 2 has been scheduled once. In Fig. 2, the array subscript of the target index queue is 0 and the weight value of the target queue is 1. After one round of scheduling, the sum of the subscript 0 and the weight 1 gives a first value of 1; taking this first value modulo 5 gives the current array subscript 1; the weight value 1 is added as an element to the tail of the index queue at subscript 1 and deleted from the target index queue, yielding the new global array of Fig. 3.
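The rotation step can be sketched in a few lines: moving the head weight to slot `(subscript + weight) mod 5` defers heavier queues further around the ring, which is what spaces out their service turns. The function name is illustrative.

```python
from collections import deque

def rotate(global_array, target_index):
    """Move the head weight of the target index queue to slot
    (target_index + weight) mod 5 and return the new subscript."""
    weight = global_array[target_index].popleft()
    new_index = (target_index + weight) % 5
    global_array[new_index].append(weight)
    return new_index
```

With the Fig. 2 example, subscript 0 and weight 1 give (0 + 1) mod 5 = 1, so weight 1 moves to the tail of the index queue at subscript 1.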
In this embodiment, when a configured number of requests are obtained from the target queue and added to a queue to be executed, the method further includes:
when the number of the requests in the target queue is smaller than the configuration number, all the requests in the target queue are added to the queue to be executed.
Wherein, the configuration number can be customized and configured, such as 50.
In the above embodiment, when the number of requests in the target queue does not reach the configured number, all the requests actually stored in the target queue are added to the queue to be executed, so as to maximize resource utilization.
And S14, concurrently executing each request in the queue to be executed in the traversing process.
In this embodiment, the concurrently executing each request in the queue to be executed includes:
constructing threads with a configured quantity, and concurrently executing each request in the queue to be executed by utilizing the threads;
the configuration number is 64, and the size of the queue to be executed is 3 times the number of concurrent requests per second.
The configuration number 64 may be obtained by performing a test according to a tool such as a performance pressure test.
For example: under the same query conditions, query operations are executed concurrently with 8, 32, 64 and 128 threads respectively; the test shows that concurrency is best when the database query is executed with 64 threads, so the configured number is determined to be 64.
Similarly, if the queue to be executed is too large, clients' concurrent-response experience suffers: a client that connects to the server later may wait too long before it starts receiving data, and clients see an inconsistent experience as the server's concurrent request load varies. If the queue is too small, the 64 threads go idle after draining it, because the scheduling thread has not yet had time to put further request tasks into the queue; server resources are then under-utilized and the concurrency capability is not fully exercised. If testing shows that about 1000 requests per second can be executed concurrently under 64 threads, the queue to be executed can be sized at 3 times that per-second request count, so that the 64 threads are kept busy without waiting and the impact on the concurrency experience is reduced.
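A sketch of the execution step under the text's example numbers: 64 worker threads draining a bounded to-be-executed queue sized at three times the measured per-second request count. The `handle_request` callback and the sentinel-based shutdown are illustrative assumptions, not part of the patent.

```python
import queue
from concurrent.futures import ThreadPoolExecutor

THREADS = 64                      # configured number from the pressure test
REQUESTS_PER_SECOND = 1000        # example measured concurrent throughput
QUEUE_SIZE = 3 * REQUESTS_PER_SECOND

def run_workers(to_execute, handle_request):
    """Drain the to-be-executed queue with THREADS workers; one None
    sentinel per worker stops the pool."""
    def worker():
        while True:
            req = to_execute.get()
            if req is None:       # sentinel: stop this worker
                return
            handle_request(req)

    with ThreadPoolExecutor(max_workers=THREADS) as pool:
        for _ in range(THREADS):
            pool.submit(worker)
```

In a real server the scheduling thread of step S13 would keep feeding `to_execute` while the pool runs; the bounded `QUEUE_SIZE` is what applies back-pressure to the scheduler.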
In this embodiment, when each request in the queue to be executed is executed concurrently, data may be obtained from the distributed database according to the request content of each request.
Further, after the data is acquired, the acquired data can be analyzed and converted according to the requirement of each client for the data format, the data result is converted into a protocol packet format agreed with the client, and the protocol packet format is issued to the client.
According to the technical scheme, requests can be traversal-polled based on the constructed client array, request queues and global array. The server therefore does not need to distinguish query requests from playback requests when scheduling: different request types are treated without differentiation, no balancing across different types of system resources needs to be considered, and resource balancing happens automatically. This yields higher client request concurrency, request execution throughput and processing efficiency, and indirectly improves the client's concurrency experience.
Fig. 4 is a functional block diagram of a preferred embodiment of the request scheduling device according to the present invention. The request scheduling device 11 includes a building unit 110, a traversing unit 111, and an executing unit 112. A module/unit as referred to herein is a series of computer program segments stored in a memory that can be executed by a processor and that can perform a fixed function. In the present embodiment, the functions of the modules/units will be described in detail in the following embodiments.
In this embodiment, the request scheduling apparatus 11 operates on a server side, and includes:
the constructing unit 110 is configured to construct a client array, and construct a request queue with different weight values for each client in the client array.
In this embodiment, the constructing unit 110 constructs the client array including:
when detecting that a client accesses the server, acquiring a client identifier of each accessed client;
and determining the client identifier of each client as an element, and forming an array according to the access sequence of each client to obtain the client array.
For example: the client array may be represented as [1,2,3] when the client identification of client A is 1, the client identification of client B is 2, and the client identification of client C is 3.
In this embodiment, after the client array is constructed, when it is detected that the client is disconnected from the server, the disconnected client is determined as a target client;
deleting the client identifier of the target client from the client array, sequentially filling the client identifiers of other clients behind the target client in the client array forwards, and performing zero filling on the last element in the client array;
when detecting that a new client is accessed to the server, acquiring a client identifier of the new client;
and replacing the element value of the last element by the acquired client identification.
For example: for the client array [1, 2, 3], when the client corresponding to element 2 is disconnected from the server, element 2 is deleted from the array, element 3 is shifted forward, and the last element is zero-filled, yielding [1, 3, 0]; when a new client D with client identifier 4 accesses the server, element 0 in [1, 3, 0] is replaced with element 4, yielding the new client array [1, 3, 4].
In this embodiment, the constructing unit 110 constructs a request queue with a different weight value for each client in the client array, including:
adding the request with the weight value of 1 to the queue to obtain a first sub-queue of each client;
adding the request with the weight value of 2 to the queue to obtain a second sub-queue of each client;
adding the request with the weight value of 3 to the queue to obtain a third sub-queue of each client;
adding the request with the weight value of 5 to the queue to obtain a fourth sub-queue of each client;
and determining a first sub-queue, a second sub-queue, a third sub-queue and a fourth sub-queue corresponding to each client as a request queue corresponding to each client.
The queue sizes of the first, second, third and fourth sub-queues of each client may be configured according to actual requirements, the operating environment, and so on. For example, each of the four sub-queues may be sized at 1000, meaning that each sub-queue of a client can store at most 1000 requests.
The constructing unit 110 is further configured to construct a global array corresponding to each client, where the size of the global array corresponding to each client is 5, each element in the global array corresponding to each client corresponds to an index queue, and the element in each index queue corresponds to a weight value of the request queue.
FIG. 2 is a schematic diagram illustrating an example of a global array according to the present invention. The number at the starting end of each arrow represents an element in the global array, the arrow points to the index queue corresponding to that element, and the numbers in each index queue represent the weight values of the corresponding request queues.
The traversing unit 111 is configured to sequentially traverse each client in the client array when performing request scheduling.
For example: for the client array [1,2,3], the client corresponding to the element 1 is traversed first, then the client corresponding to the element 2 is traversed, and finally the client corresponding to the element 3 is traversed.
The traversing unit 111 is further configured to determine the client traversed this time as the current client. When traversing the current client, it continues to poll each element in the global array corresponding to the current client, starting from the element polled last time, until the index queue corresponding to some element is not empty. That non-empty index queue is determined as the target index queue, the element at the head of the target index queue is acquired as the target element, and the request queue pointed to by the target element is determined as the target queue. Polling of the global array corresponding to the current client then stops, a configured number of requests are acquired from the target queue and added to the queue to be executed, and traversal continues with the next client adjacent to the current client in the client array.
This embodiment continues polling the elements in the global array corresponding to the current client from the element polled last time, which avoids the loss of polling efficiency that would result from restarting at the first element every time.
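One traversal pass, with the resume-from-last-position polling, can be sketched as follows (a minimal Python sketch under assumed data layouts — dicts keyed by client identifier, deques for queues; the re-queuing of the served weight after a batch is taken is omitted here):

```python
from collections import deque

GLOBAL_SIZE = 5   # size of each client's global array

def poll_global_array(global_array, start):
    """Poll from the last polled position until a non-empty index queue is
    found; return (its subscript, the weight value at its head) or None."""
    for step in range(GLOBAL_SIZE):
        idx = (start + step) % GLOBAL_SIZE
        if global_array[idx]:                 # non-empty -> target index queue
            return idx, global_array[idx][0]  # head element = target queue's weight
    return None                               # every index queue is empty

def schedule_pass(client_array, global_arrays, request_queues,
                  to_execute, last_polled, batch=50):
    """One traversal over the client array; zero elements are free slots."""
    for cid in client_array:
        if cid == 0:
            continue
        hit = poll_global_array(global_arrays[cid], last_polled[cid])
        if hit is None:
            continue
        idx, weight = hit
        last_polled[cid] = idx                # remember where polling stopped
        target_queue = request_queues[cid][weight]
        # take up to `batch` requests; fewer if the target queue holds fewer
        for _ in range(min(batch, len(target_queue))):
            to_execute.append(target_queue.popleft())
```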
In addition, this embodiment keeps traversing until the server exits, so client polling runs continuously and different request types are handled in real time without differentiation (for example, query requests and playback requests are not distinguished; they are simply inserted into the corresponding request queues according to their weights). Balancing system resources across request types therefore does not need to be considered separately: resource balancing is automatic, client request concurrency, request execution throughput, and processing efficiency are higher, and the concurrency experience of the client is indirectly improved.
In this embodiment, after a configured number of requests are obtained from the target queue and added to the queue to be executed, the array subscript of the target index queue is obtained;
acquiring a weight value of the target queue;
calculating the sum of the array subscript and the weight value to obtain a first numerical value;
taking the remainder of the first numerical value divided by 5 to obtain a current array subscript;
acquiring an index queue corresponding to the subscript of the current array as a current index queue;
and adding the weighted value of the target queue as an element to the tail of the current index queue, and deleting the element corresponding to the weighted value of the target queue from the target index queue.
Fig. 3 is a schematic diagram illustrating an example of the new global array obtained after the global array in Fig. 2 is scheduled once according to the present invention. In Fig. 2, the array subscript of the target index queue is 0 and the weight value of the target queue is 1. After one scheduling pass, the sum of the array subscript 0 and the weight value 1 is calculated to obtain a first value of 1; the remainder of the first value 1 divided by 5 gives the current array subscript 1. The weight value 1 of the target queue is then added as an element to the tail of the index queue corresponding to the current array subscript 1, and the element corresponding to the weight value 1 is deleted from the target index queue, yielding the new global array of Fig. 3.
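The re-queuing step from Fig. 2 to Fig. 3 can be sketched as follows (a hypothetical Python sketch; `requeue_weight` is an illustrative name, not from the patent):

```python
from collections import deque

def requeue_weight(global_array, target_idx):
    """Move the served weight from the head of the target index queue to the
    tail of the index queue at (array subscript + weight) mod 5."""
    weight = global_array[target_idx].popleft()  # delete from target index queue
    current_idx = (target_idx + weight) % 5      # remainder operation on 5
    global_array[current_idx].append(weight)     # add to the tail
    return current_idx

# Mirroring the example: subscript 0, weight 1 -> current subscript 1
ga = [deque([1]), deque(), deque(), deque(), deque()]
requeue_weight(ga, 0)   # -> 1; weight 1 now at the tail of index queue 1
```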
In this embodiment, when a configured number of requests are to be acquired from the target queue and added to the queue to be executed, but the number of requests in the target queue is smaller than the configured number, all requests in the target queue are added to the queue to be executed.
Wherein, the configuration number can be customized and configured, such as 50.
In the above embodiment, when the number of requests in the target queue does not reach the configured number, all requests actually stored in the target queue are added to the queue to be executed, so as to maximize resource utilization.
The execution unit 112 is configured to concurrently execute each request in the queue to be executed in the traversal process.
In this embodiment, the concurrently executing, by the execution unit 112, each request in the queue to be executed includes:
constructing a configured number of threads, and utilizing the threads to concurrently execute each request in the queue to be executed;
the configuration number is 64, and the size of the queue to be executed is 3 times the number of concurrent requests per second.
The configuration number 64 may be obtained through testing, for example with a performance stress-testing tool.
For example: under the same query conditions, the query operation is executed concurrently with 8, 32, 64, and 128 threads respectively; when the database query is executed concurrently with 64 threads, the concurrency capability is best, so the configuration number is determined to be 64.
Similarly, if the queue to be executed is too large, the client's experience of concurrent requests and responses suffers: a client that connects to the server later may wait too long before it starts receiving data, so different clients experience the server's handling of concurrent requests unevenly. If the queue is too small, the 64 threads sit idle after finishing their work because the scheduling thread has not yet had time to put request tasks into the queue to be executed; server resources are then not fully utilized and the concurrency performance is not fully exploited. If testing shows that the number of concurrent requests per second executed under 64 threads is about 1000, the size of the queue to be executed can be configured to 3 times that number, so that the 64 threads are fully utilized, waiting during concurrent execution is avoided, and the impact on the concurrency experience is reduced.
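A minimal sketch of the 64-thread concurrent execution (Python's `ThreadPoolExecutor` stands in for whatever threading mechanism the implementation uses; the `THREADS` and `RPS` values come from the text's example, and `handle` is a hypothetical request handler):

```python
from concurrent.futures import ThreadPoolExecutor

THREADS = 64                # configured number of threads
RPS = 1000                  # tested concurrent requests per second (example)
QUEUE_CAPACITY = 3 * RPS    # queue to be executed sized at 3x requests/second

def execute_concurrently(requests, handle):
    """Execute every request in the queue to be executed on a fixed thread pool."""
    with ThreadPoolExecutor(max_workers=THREADS) as pool:
        return list(pool.map(handle, requests))

results = execute_concurrently(range(4), lambda r: r * 2)   # -> [0, 2, 4, 6]
```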
In this embodiment, when each request in the queue to be executed is executed concurrently, data may be obtained from the distributed database according to the request content of each request.
Further, after the data is obtained, the acquired data can be parsed and converted according to each client's data format requirements: the data result is converted into the protocol packet format agreed upon with the client and delivered to the client.
According to the technical scheme, requests can be traversed and polled based on the constructed client array, request queues, and global arrays. The server therefore does not need to distinguish query requests from playback requests when scheduling: different request types are treated without differentiation, balancing of different types of system resources does not need to be considered, resource balancing is automatic, client request concurrency, request execution throughput, and processing efficiency are higher, and the client's concurrency experience is indirectly improved.
Fig. 5 is a schematic structural diagram of a computer device according to a preferred embodiment of the present invention.
The computer device 1 may comprise a memory 12, a processor 13 and a bus, and may further comprise a computer program, such as a request scheduler, stored in the memory 12 and executable on the processor 13.
It will be understood by those skilled in the art that the schematic diagram is merely an example of the computer device 1 and does not constitute a limitation of the computer device 1. The computer device 1 may have a bus-type or star-type structure, may include more or fewer hardware or software components than shown, or may have a different arrangement of components; for example, the computer device 1 may further include an input/output device, a network access device, etc.
It should be noted that the computer device 1 is only an example; other electronic products, whether currently available or developed in the future, that can be adapted to the present invention should also be included in the scope of protection of the present invention and are incorporated herein by reference.
The memory 12 includes at least one type of readable storage medium, including flash memory, removable hard disks, multimedia cards, card-type memory (e.g., SD or DX memory, etc.), magnetic memory, magnetic disks, optical disks, and the like. The memory 12 may in some embodiments be an internal storage unit of the computer device 1, e.g. a removable hard disk of the computer device 1. The memory 12 may also be an external storage device of the computer device 1 in other embodiments, such as a plug-in removable hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), etc. provided on the computer device 1. Further, the memory 12 may also include both an internal storage unit and an external storage device of the computer device 1. The memory 12 can be used not only for storing application software installed in the computer apparatus 1 and various types of data such as a code of a request scheduler or the like, but also for temporarily storing data that has been output or will be output.
The processor 13 may be composed of an integrated circuit in some embodiments, for example, a single packaged integrated circuit, or may be composed of a plurality of integrated circuits packaged with the same or different functions, including one or more Central Processing Units (CPUs), microprocessors, digital Processing chips, graphics processors, and combinations of various control chips. The processor 13 is a Control Unit (Control Unit) of the computer device 1, connects various components of the whole computer device 1 by various interfaces and lines, and executes various functions of the computer device 1 and processes data by running or executing programs or modules (e.g., executing a request scheduler, etc.) stored in the memory 12 and calling data stored in the memory 12.
The processor 13 executes the operating system of the computer device 1 and various installed application programs. The processor 13 executes the application program to implement the steps in the various request scheduling method embodiments described above, such as the steps shown in fig. 1.
Illustratively, the computer program may be partitioned into one or more modules/units, which are stored in the memory 12 and executed by the processor 13 to implement the present invention. The one or more modules/units may be a series of computer readable instruction segments capable of performing certain functions, which are used to describe the execution of the computer program in the computer device 1. For example, the computer program may be partitioned into a building unit 110, a traversal unit 111, and an execution unit 112.
The integrated unit implemented in the form of a software functional module may be stored in a computer-readable storage medium. The software functional module is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a computer device, or a network device) or a processor (processor) to execute parts of the request scheduling method according to the embodiments of the present invention.
The modules/units integrated by the computer device 1 may be stored in a computer-readable storage medium if they are implemented in the form of software functional units and sold or used as separate products. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may be implemented by a computer program, which may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method embodiments described above may be implemented.
Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory, etc.
Further, the computer-readable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created according to the use of the blockchain node, and the like.
A blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database: a series of data blocks linked by cryptographic methods, where each data block contains information on a batch of network transactions, used to verify the validity (anti-counterfeiting) of the information and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
The bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one line is shown in FIG. 5, but that does not indicate only one bus or type of bus. The bus is arranged to enable connection communication between the memory 12 and at least one processor 13 etc.
Although not shown, the computer device 1 may further include a power supply (such as a battery) for supplying power to each component, and preferably, the power supply may be logically connected to the at least one processor 13 through a power management device, so that functions of charge management, discharge management, power consumption management and the like are realized through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The computer device 1 may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
Further, the computer device 1 may further include a network interface, and optionally, the network interface may include a wired interface and/or a wireless interface (such as a WI-FI interface, a bluetooth interface, etc.), which are generally used to establish a communication connection between the computer device 1 and other computer devices.
Optionally, the computer device 1 may further comprise a user interface, which may be a Display (Display), an input unit, such as a Keyboard (Keyboard), and optionally a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is suitable, among other things, for displaying information processed in the computer device 1 and for displaying a visualized user interface.
It is to be understood that the embodiments described are illustrative only and are not to be construed as limiting the scope of the claims.
Fig. 5 shows only a computer device 1 with components 12-13, and it will be understood by a person skilled in the art that the structure shown in fig. 5 does not constitute a limitation of the computer device 1, and may comprise fewer or more components than shown, or a combination of certain components, or a different arrangement of components.
With reference to fig. 1-3, the memory 12 of the computer device 1 stores a plurality of instructions to implement a request scheduling method, and the processor 13 executes the plurality of instructions to implement:
constructing a client array, and constructing a request queue with different weight values for each client in the client array;
constructing a global array corresponding to each client, wherein the size of the global array corresponding to each client is 5, each element in the global array corresponding to each client corresponds to an index queue, and the element in each index queue corresponds to the weight value of the request queue;
when the request scheduling is carried out, sequentially traversing each client in the client array;
in the traversing process, determining a client traversed this time as a current client, continuously polling each element in a global array corresponding to the current client from the last polled element when the current client is traversed until an index queue corresponding to an element is not empty, determining the index queue which is not empty as a target index queue, acquiring an element at the head of the queue in the target index queue as a target element, determining a request queue pointed to by the target element as a target queue, stopping polling the global array corresponding to the current client, acquiring a configured number of requests from the target queue, adding the requests to a queue to be executed, and continuing to traverse a next client adjacent to the current client in the client array;
and in the traversing process, executing each request in the queue to be executed concurrently.
Specifically, the specific implementation method of the instruction by the processor 13 may refer to the description of the relevant steps in the embodiments corresponding to fig. 1 to 3, which is not described herein again.
It should be noted that all the data involved in the present application are legally acquired.
In the embodiments provided in the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The invention is operational with numerous general purpose or special purpose computing system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet-type devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like. The invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one position, or may be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
Furthermore, it will be obvious that the term "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the present invention may also be implemented by one unit or means through software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (10)

1. A request scheduling method is applied to a server side, and the request scheduling method comprises the following steps:
constructing a client array, and constructing a request queue with different weight values for each client in the client array;
constructing a global array corresponding to each client, wherein the size of the global array corresponding to each client is 5, each element in the global array corresponding to each client corresponds to one index queue, and the element in each index queue corresponds to the weight value of the request queue;
when the request scheduling is carried out, sequentially traversing each client in the client array;
in the traversing process, determining a client traversed this time as a current client, continuously polling each element in a global array corresponding to the current client from an element polled last time when the current client is traversed until an index queue corresponding to the element is not empty, determining an index queue which is not empty as a target index queue, acquiring an element at the head of the target index queue as a target element, determining a request queue pointed by the target element as a target queue, stopping polling the global array corresponding to the current client, acquiring requests of a configuration number from the target queue, adding the requests to a queue to be executed, and continuously traversing a next client adjacent to the current client in the client array;
and in the traversing process, executing each request in the queue to be executed concurrently.
2. The request scheduling method of claim 1, wherein the building a client array comprises:
when detecting that a client accesses the server, acquiring a client identifier of each client accessed;
and determining the client identifier of each client as an element, and forming an array according to the access sequence of each client to obtain the client array.
3. The request scheduling method of claim 2, wherein after building the client array, the method further comprises:
when detecting that the client is disconnected with the server, determining the disconnected client as a target client;
deleting the client identifier of the target client from the client array, sequentially filling the client identifiers of other clients behind the target client in the client array forwards, and performing zero filling processing on the last element in the client array;
when detecting that a new client is accessed to the server, acquiring a client identifier of the new client;
and replacing the element value of the last element by the acquired client identification.
4. The request scheduling method of claim 1, wherein said constructing a request queue of different weight values for each client in the client array comprises:
adding a request with the weight value of 1 to the queue to obtain a first sub-queue of each client;
adding the request with the weight value of 2 to the queue to obtain a second sub-queue of each client;
adding the request with the weight value of 3 to the queue to obtain a third sub-queue of each client;
adding the request with the weight value of 5 to the queue to obtain a fourth sub-queue of each client;
and determining a first sub-queue, a second sub-queue, a third sub-queue and a fourth sub-queue corresponding to each client as a request queue corresponding to each client.
5. The request scheduling method of claim 1, wherein after obtaining a configured number of requests from the target queue and adding the requests to a queue to be executed, the method further comprises:
acquiring an array subscript of the target index queue;
acquiring a weight value of the target queue;
calculating the sum of the array subscript and the weight value to obtain a first numerical value;
taking the remainder of the first numerical value divided by 5 to obtain a current array subscript;
acquiring an index queue corresponding to the current array subscript as a current index queue;
and adding the weighted value of the target queue as an element to the tail of the current index queue, and deleting the element corresponding to the weighted value of the target queue from the target index queue.
6. The request scheduling method of claim 1, wherein when a configured number of requests from the target queue are added to a queue to be executed, the method further comprises:
when the number of the requests in the target queue is smaller than the configuration number, all the requests in the target queue are added to the queue to be executed.
7. The request scheduling method of claim 1, wherein the concurrently executing each request in the queue to be executed comprises:
constructing a configured number of threads, and concurrently executing each request in the queue to be executed by utilizing the threads;
the configuration number is 64, and the size of the queue to be executed is 3 times the number of concurrent requests per second.
8. A request scheduling apparatus, operating on a server side, the request scheduling apparatus comprising:
the system comprises a construction unit, a data processing unit and a data processing unit, wherein the construction unit is used for constructing a client array and constructing a request queue with different weight values for each client in the client array;
the building unit is further configured to build a global array corresponding to each client, where the size of the global array corresponding to each client is 5, each element in the global array corresponding to each client corresponds to one index queue, and an element in each index queue corresponds to a weight value of the request queue;
the traversing unit is used for sequentially traversing each client in the client array when the request scheduling is carried out;
the traversal unit is further configured to determine, in a traversal process, a client that is traversed this time as a current client, and when traversing the current client, continue to poll each element in a global array corresponding to the current client from an element that is polled last time until an index queue corresponding to an element is not empty, determine an index queue that is not empty as a target index queue, acquire an element at a head of a queue in the target index queue as a target element, determine a request queue pointed by the target element as a target queue, stop polling the global array corresponding to the current client, acquire a configured number of requests from the target queue, add the requests to a queue to be executed, and continue to traverse a next client that is adjacent to the current client in the client array;
and the execution unit is used for concurrently executing each request in the queue to be executed in the traversal process.
9. A computer device, characterized in that the computer device comprises:
a memory storing at least one instruction; and
a processor executing instructions stored in the memory to implement the request scheduling method of any of claims 1 to 7.
10. A computer-readable storage medium, characterized in that: the computer-readable storage medium has stored therein at least one instruction that is executable by a processor in a computer device to implement the request scheduling method of any one of claims 1 to 7.
CN202211189676.0A 2022-09-28 2022-09-28 Request scheduling method, device, equipment and medium Active CN115277859B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211189676.0A CN115277859B (en) 2022-09-28 2022-09-28 Request scheduling method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211189676.0A CN115277859B (en) 2022-09-28 2022-09-28 Request scheduling method, device, equipment and medium

Publications (2)

Publication Number Publication Date
CN115277859A CN115277859A (en) 2022-11-01
CN115277859B true CN115277859B (en) 2022-12-06

Family

ID=83757060

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211189676.0A Active CN115277859B (en) 2022-09-28 2022-09-28 Request scheduling method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN115277859B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2923112A1 (en) * 2007-10-24 2009-05-01 Seanodes Sa Computer-based system, has storage server with scheduler that executes access requests contained in queue in determined order and determines order as function of rules forming performance criteria and based on state parameters of queue
CN111756811A (en) * 2020-05-29 2020-10-09 苏州浪潮智能科技有限公司 Method, system, device and medium for actively pushing distributed system
CN114741422A (en) * 2022-06-09 2022-07-12 深圳华锐分布式技术股份有限公司 Query request method, device, equipment and medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108009006B (en) * 2016-11-02 2022-02-18 华为技术有限公司 Scheduling method and device of I/O (input/output) request

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2923112A1 (en) * 2007-10-24 2009-05-01 Seanodes Sa Computer-based system, has storage server with scheduler that executes access requests contained in queue in determined order and determines order as function of rules forming performance criteria and based on state parameters of queue
CN111756811A (en) * 2020-05-29 2020-10-09 苏州浪潮智能科技有限公司 Method, system, device and medium for actively pushing distributed system
WO2021238287A1 (en) * 2020-05-29 2021-12-02 苏州浪潮智能科技有限公司 Method, system and device for active pushing of distributed system, and medium
CN114741422A (en) * 2022-06-09 2022-07-12 深圳华锐分布式技术股份有限公司 Query request method, device, equipment and medium

Non-Patent Citations (2)

Title
Queuing Analysis for Content Placement and Request Scheduling in Data Center; Lijun Dong; 2020 International Conference on Computing, Networking and Communications (ICNC); 2020-03-30; full text *
Dynamic concurrent request scheduling and processing queue algorithm for public-network digital trunking ***; 李耀升; Journal of Computer Applications; 2022-06-30; full text *

Also Published As

Publication number Publication date
CN115277859A (en) 2022-11-01

Similar Documents

Publication Publication Date Title
CN114124968B (en) Load balancing method, device, equipment and medium based on market data
CN112559535B (en) Multithreading-based asynchronous task processing method, device, equipment and medium
CN111104210A (en) Task processing method and device and computer system
CN113890712A (en) Data transmission method and device, electronic equipment and readable storage medium
CN112256783A (en) Data export method and device, electronic equipment and storage medium
CN115617403A (en) Clearing task execution method, device, equipment and medium based on task segmentation
CN114880368A (en) Data query method and device, electronic equipment and readable storage medium
CN114844844A (en) Delay message processing method, device, equipment and storage medium
CN116755637B (en) Transaction data storage method, device, equipment and medium
CN115269709B (en) Inquiry and playback method, device, equipment and medium based on market data
CN115314570B (en) Data issuing method, device, equipment and medium based on protocol development framework
CN115277859B (en) Request scheduling method, device, equipment and medium
CN114675976B (en) GPU (graphics processing Unit) sharing method, device, equipment and medium based on kubernets
CN114816371B (en) Message processing method, device, equipment and medium
CN115345746A (en) Security transaction method, device, equipment and medium
CN113918305A (en) Node scheduling method and device, electronic equipment and readable storage medium
CN114185502A (en) Log printing method, device, equipment and medium based on production line environment
CN113918296A (en) Model training task scheduling execution method and device, electronic equipment and storage medium
CN114124835A (en) Interface-based data transmission method, device, equipment and medium
CN116701233B (en) Transaction system testing method, equipment and medium based on high concurrency report simulation
CN114860349B (en) Data loading method, device, equipment and medium
CN113971127A (en) List system testing method, device, equipment and medium based on big data structure
CN115065642B (en) Code table request method, device, equipment and medium under bandwidth limitation
CN115964307B (en) Automatic test method, device, equipment and medium for transaction data
CN116414699B (en) Operation and maintenance testing method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant