CN107613025B - Message queue sequence reply-based implementation method and device - Google Patents


Info

Publication number: CN107613025B
Application number: CN201711046688.7A
Authority: CN (China)
Legal status: Active
Other versions: CN107613025A (Chinese)
Inventors: 夏渊, 陈志�, 余春平, 卜勤练
Current assignee: Accelink Technologies Co Ltd
Application filed by Accelink Technologies Co Ltd

Landscapes

  • Information Transfer Between Computers (AREA)

Abstract

The invention relates to the technical field of sequential message-queue transmission and reception, and provides a method and a device for implementing sequential reply based on a message queue. After a distribution server receives a request message from a client, it splits the message into a group of sequentially arranged instruction queues. Each instruction in the instruction queue is distributed according to a maintained sub-server list and the historical time each sub-server has taken to complete each instruction, so that the total time for all instructions to be executed and completed is shortest. After receiving the processing results of the sub-servers, the distribution server combines the results in the corresponding instruction order to obtain a response message. The invention provides a task-allocation mechanism: a request message is divided into a plurality of instructions, and the optimal allocation of those instructions for completing the current request message is computed from the per-instruction processing time of each sub-server obtained by historical training, thereby improving request-message processing efficiency in the existing system.

Description

Message queue sequence reply-based implementation method and device
[ Technical field ]
The invention relates to the technical field of sequential message-queue transmission and reception, and in particular to a method and a device for implementing sequential reply based on a message queue.
[ background of the invention ]
In network communication, a common mode is the Client/Server communication mode (usually called the C/S mode, as distinct from the browser-based B/S mode): a server provides various services (such as database query or FTP file download), and a client requests a required service from the server. The characteristic of this mode is that the client sends a request to the server and waits for the server's reply. In practical applications there is a scenario in which a client requests a service from our server, and our server in turn needs to query other distributed servers for data; the network model is shown in fig. 1.
In fig. 1, the server that interacts directly with the client is called the Distributor (dist for short), because it forwards instructions to the backend servers and returns their responses to the client; the backend Server 1, Server 2, … Server n provide the specific services.
In such a scenario, the response times of the distributed servers to different instructions differ: some instructions involve complex, time-consuming processing logic, while others are simple, fast queries, so the order of instruction responses may not match the order of transmission. For example, the client sends instructions A, B, and C in sequence to the server dist, and dist distributes A, B, and C in sequence to the backend servers. The inventors found that feeding back response contents in the same order as the request contents in the request message is a necessary condition for ensuring that an operator can effectively interpret the response message. Ordered combination of the response messages is therefore required, yet the prior art distributes instructions only according to whether a sub-server is idle, which may delay the feedback of the response message corresponding to the whole request message.
[ summary of the invention ]
The technical problem to be solved by the present invention is that, to achieve accurate readability of response messages, the response messages must be combined in order; however, the prior art distributes instructions only according to whether a sub-server is idle, which may delay the feedback of the response message corresponding to the entire request message.
The invention adopts the following technical scheme:
in a first aspect, the present invention provides a method for implementing sequential reply based on a message queue, in which the response content replied by a distribution server to a client is arranged in the order of the instruction queue received by the distribution server, the method comprising:
after receiving a request message from a client, the distribution server splits the request message into a group of sequentially arranged instruction queues;
each instruction in the instruction queue is distributed according to the maintained sub-server list and the historical time each sub-server has taken to complete each instruction in the queue, so that the total time for all instructions to be executed and completed is shortest;
and after receiving the processing results of the sub-servers, the distribution server combines the processing results in the corresponding instruction order to obtain a response message and sends it to the client.
Preferably, the implementation method further includes:
confirming the various request messages that the distribution server can receive; determining the instruction queues into which those request messages can be split, wherein the instruction of each queue member in an instruction queue is processed by a sub-server and the result fed back to the distribution server;
and registering the time each sub-server takes to complete each queue member in each instruction queue, so that the distribution server maintains, for every instruction, a unified record of the processing time required on each sub-server it manages.
Preferably, the method further comprises:
the distribution server adjusts the granularity with which a request message is divided into instructions, and calculates the time required to complete integration of the response message under each division granularity;
the division granularity allocated to each request message is then determined by ranking the times required to complete integration of the response message.
Preferably, after receiving a plurality of request messages from a plurality of clients, the distribution server splits them into one or more groups of sequentially arranged instruction queues;
identical instructions contained in the instruction queues of different groups are merged into a single instruction distributed to a sub-server, and a processing-result mapping list for that instruction is established, so that when the processing result of the merged instruction arrives it can be placed, according to the mapping list, into the processing-result queue of each response message.
Preferably, the system further comprises a standby distribution server, specifically:
when the distribution server is in a normal working state, the standby distribution server is used to temporarily back up the response messages sent by the distribution server to the client;
when the distribution server confirms that collecting the results of the first client's instruction queue from the sub-servers has timed out, it retrieves from the standby distribution server the latest response message similar to the first client's instruction queue and, with its source marked, sends it to the first client as a transitional display; after the timed-out instruction processing results are completed, they are integrated into a response message and sent to the first client.
Preferably, when distributing each instruction in the instruction queue, a mapping relationship is established between one or more items of the IP, the physical address, and the virtual identifier of the sub server corresponding to the received instruction and the corresponding instruction in the instruction queue.
Preferably, the client is a personal computer, and the distribution server is an optical transmission board card.
Preferably, the request message includes: database query, FTP file download, module state acquisition and module parameter setting.
In a second aspect, the present invention further provides an apparatus for implementing sequential reply based on a message queue. The apparatus includes an instruction-queue generating module, an instruction allocating module, and a processing-result combining module, connected in sequence, specifically:
the instruction queue generating module is used for splitting the received request message of the client into a group of instruction queues which are arranged in sequence;
the instruction distribution module is used for distributing each instruction in the instruction queue according to the maintained sub-server list and the historical time length of each instruction in the instruction queue completed by each sub-server, so that the total time length of each instruction executed and completed by each sub-server is shortest;
and the processing result combination module is used for combining the processing results according to the corresponding instruction sequence to obtain a response message after the distribution server receives the processing results of each sub-server, and sending the response message to the client.
In a third aspect, the present invention further provides an implementation apparatus for sequential reply based on a message queue, which is used to implement the implementation method for sequential reply based on a message queue in the first aspect, and the apparatus includes:
at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor which, when executed, cause the at least one processor to perform the method of implementing sequential reply based on a message queue according to the first aspect.
In a fourth aspect, the present invention further provides a non-transitory computer storage medium storing computer-executable instructions which, when executed by one or more processors, implement the method for sequential reply based on a message queue described in the first aspect.
The invention provides a task-allocation mechanism: a request message is divided into a plurality of instructions, and the optimal allocation of those instructions for completing the current request message is computed from the per-instruction processing time of each sub-server obtained by historical training, thereby improving request-message processing efficiency in the existing system.
[ description of the drawings ]
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required to be used in the embodiments of the present invention will be briefly described below. It is obvious that the drawings described below are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
Fig. 1 is an architecture diagram for implementing sequential reply based on message queues in the prior art according to an embodiment of the present invention;
fig. 2 is a flowchart of an implementation method for reply based on a message queue sequence according to an embodiment of the present invention;
fig. 3 is a flowchart illustrating the learning of the computing power of the sub-server in the method for implementing sequential reply based on message queues according to the embodiment of the present invention;
fig. 4 is a flowchart of instruction granularity learning for a distribution server in an implementation method based on message queue sequential reply according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an implementation apparatus for sequential reply based on a message queue according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an implementation apparatus for sequential reply based on a message queue according to an embodiment of the present invention.
[ Detailed description of embodiments ]
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the description of the present invention, the terms "inner", "outer", "longitudinal", "lateral", "upper", "lower", "top", "bottom", and the like indicate orientations or positional relationships based on those shown in the drawings, and are for convenience only to describe the present invention without requiring the present invention to be necessarily constructed and operated in a specific orientation, and thus should not be construed as limiting the present invention.
In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Example 1:
embodiment 1 of the present invention provides an implementation method based on message queue order reply, where response content replied to a client by a distribution server is arranged according to an instruction queue order received by the distribution server, as shown in fig. 2, the implementation method includes:
in step 201, after receiving a request message from a client, a distribution server splits the request message into a set of instruction queues arranged in sequence.
The client is specifically a personal computer, and the distribution server is an optical transmission board card. The client can establish a data path with the distribution server through a serial port and/or a network port.
Wherein the request message comprises: database query, FTP file download, module state acquisition and module parameter setting.
In step 202, each instruction in the instruction queue is allocated according to the maintained sub-server list and the historical time each sub-server has taken to complete each instruction in the queue, so that the total time for all instructions to be executed and completed is shortest.
In step 203, after receiving the processing results of each sub-server, the distribution server combines the processing results according to the corresponding instruction sequence to obtain a response message, and sends the response message to the client.
The embodiment of the invention provides a task-allocation mechanism: the request message is divided into a plurality of instructions, and the optimal allocation of those instructions for completing the current request message is computed from the per-instruction processing time of each sub-server obtained by historical training, thereby improving request-message processing efficiency in the existing system.
The historical time for each sub-server to process each instruction can be obtained by having the distribution server dispatch the corresponding instructions to sub-servers while they are idle and recording the completion times; it can also be obtained by recording the completion time of each instruction during the sub-servers' actual operation; or by a combination of the two, with the accumulated average taken as the historical time for that sub-server to process that instruction. Each pairing of an instruction and a sub-server thus yields one historical time.
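The allocation described above can be illustrated with a minimal greedy sketch. The patent does not spell out its optimization algorithm, so this is only one plausible reading: each instruction is placed on the sub-server whose accumulated load plus historical duration for that instruction is smallest, approximately minimizing the total completion time. The `HISTORY` table, server names, and the `assign` helper are all illustrative assumptions.

```python
# Greedy allocation sketch: place each instruction on the sub-server that
# would finish it earliest given the work already assigned to that server.
from collections import defaultdict

# Hypothetical historical average times (seconds) per (sub-server, instruction),
# as gathered by the distribution server through idle-time or live training.
HISTORY = {
    ("server1", "A"): 0.2, ("server1", "B"): 1.5, ("server1", "C"): 0.4,
    ("server2", "A"): 0.6, ("server2", "B"): 0.3, ("server2", "C"): 0.5,
}

def assign(instructions, servers):
    """Return (instruction -> sub-server plan, estimated total completion time)."""
    load = defaultdict(float)   # accumulated busy time per sub-server
    plan = {}
    for ins in instructions:
        # Choose the server with the smallest finish time for this instruction.
        best = min(servers, key=lambda s: load[s] + HISTORY[(s, ins)])
        plan[ins] = best
        load[best] += HISTORY[(best, ins)]
    return plan, max(load.values())

plan, makespan = assign(["A", "B", "C"], ["server1", "server2"])
# A and C go to server1, the slow instruction B goes to server2.
```

Results are still combined in the original A, B, C order regardless of which server answers first, which is the point of the sequential-reply mechanism.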
Since the technical field of the embodiment of the present invention involves the system architecture of optical modules and optical transmission boards, the client request messages, the instructions into which they can be split, and the request contents those instructions can carry are all of a limited number of types. Therefore, as a preferred supplementary solution, the execution capability of each sub-server for every single instruction in the limited instruction set can be determined in advance. As shown in fig. 3, the implementation method further includes:
in step 301, various request messages that can be received by the distribution server are acknowledged.
Here, confirming the various request messages that the distribution server can receive may be based on a big-data summary of historical request messages, or may be generated from a dynamic combination of client functions and/or sub-server functions.
In step 302, the instruction queues that can be split out of the various request messages are determined, wherein the instructions of each queue member in the instruction queues are processed by the sub-server, and then the processing results are fed back to the distribution server.
In step 303, the time each sub-server takes to complete each queue member in each instruction queue is registered, so that the distribution server maintains, for every instruction, a unified record of the processing time required on each sub-server it manages.
Through steps 301-303, the time each sub-server needs to complete each identical instruction can be obtained, providing a reference for the distribution server when, in a concrete application environment, it receives a request message, splits it into an instruction queue, and must allocate to each instruction the sub-server with the best execution efficiency.
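Steps 301-303 amount to the distribution server keeping, per (sub-server, instruction) pair, a record of observed completion times. A minimal sketch of such a registry, maintaining the accumulated average mentioned earlier, could look as follows; the class and method names are assumptions for illustration, not from the patent.

```python
# Running-average registry of per-instruction completion times per sub-server.
class TimingRegistry:
    def __init__(self):
        self._sum = {}    # (server, instruction) -> total observed seconds
        self._count = {}  # (server, instruction) -> number of observations

    def register(self, server, instruction, seconds):
        """Record one observed completion time (step 303)."""
        key = (server, instruction)
        self._sum[key] = self._sum.get(key, 0.0) + seconds
        self._count[key] = self._count.get(key, 0) + 1

    def average(self, server, instruction):
        """Historical average used when allocating instructions."""
        key = (server, instruction)
        return self._sum[key] / self._count[key]

reg = TimingRegistry()
reg.register("server1", "db_query", 0.30)
reg.register("server1", "db_query", 0.50)   # running average becomes 0.40
```

The same registry serves both training modes described above: entries can come from instructions dispatched while a sub-server is idle or from live operation.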
In an embodiment of the present invention, in order to further improve the accuracy of the distribution server analyzing the request message to obtain the instruction queue and the efficiency of the sub-server executing the instruction, a better improvement scheme is further provided based on the learning process of the sub-server, as shown in fig. 4, where the method further includes:
in step 401, the distribution server adjusts the granularity of the dividing instruction of the request message, and calculates the time required for completing the integration of the response message under the condition of different granularity of the dividing instruction.
In step 402, the granularity of the partitioning instructions allocated to each request message is determined by the time ordering required to complete the integration of the response messages.
The extension of steps 401-402 recognizes that the granularity of instruction division strongly affects the time needed to generate the final response message. The inventors found experimentally that if instructions are divided too finely, transition results between some instructions must subsequently be referenced, so an over-fine division increases the amount of data interaction between the distribution server and the sub-servers. The time consumed by this extra interaction can outweigh the speed-up gained from having multiple sub-servers execute the finer-grained instructions, making the total time longer than it would have been under the previous coarser-grained division.
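The trade-off of steps 401-402 can be made concrete with a toy cost model ranked by estimated integration time. The constants here (total work, per-instruction interaction overhead, server count) and the function name are illustrative assumptions, not measurements from the patent; the point is only that the minimum lies at an intermediate granularity.

```python
# Toy model: finer splits parallelize better but pay a fixed interaction
# overhead per instruction, so an intermediate granularity wins.
def estimated_time(n_parts, work_total=8.0, n_servers=4, overhead=0.5):
    """Estimated time to integrate the response when the request is split
    into n_parts instructions spread over n_servers sub-servers."""
    per_part = work_total / n_parts + overhead   # compute + interaction cost
    rounds = -(-n_parts // n_servers)            # ceil: sequential waves of work
    return rounds * per_part

# Step 402: rank candidate granularities by estimated integration time.
candidates = [1, 2, 4, 8, 16]
best = min(candidates, key=estimated_time)
# Under these assumed constants, splitting into 4 parts is fastest; both the
# monolithic request (1) and the over-fine split (16) are slower.
```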
In the embodiment of the present invention, besides parsing the request message of a single client, a distribution server in a specific application usually receives multiple request messages from multiple clients. In that case, after receiving the request messages, the distribution server splits them into one or more groups of sequentially arranged instruction queues;
identical instructions contained in the instruction queues of different groups are merged into a single instruction distributed to a sub-server, and a processing-result mapping list for that instruction is established, so that when the processing result of the merged instruction arrives it can be placed, according to the mapping list, into the processing-result queue of each response message.
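The multi-client merge just described can be sketched as follows: identical instructions across clients' queues are dispatched once, and a processing-result mapping list fans the shared result back into each client's ordered result queue. Function names and data shapes are illustrative assumptions.

```python
# Merge identical instructions from several clients' queues and fan the
# shared results back out in each client's original order.
def merge_and_map(queues):
    """queues: {client: [instruction, ...]} -> (instructions to dispatch once,
    mapping of instruction -> [(client, position), ...])."""
    to_send, mapping = [], {}
    for client, instructions in queues.items():
        for pos, ins in enumerate(instructions):
            if ins not in mapping:
                mapping[ins] = []
                to_send.append(ins)      # each distinct instruction dispatched once
            mapping[ins].append((client, pos))
    return to_send, mapping

def fan_out(results, mapping, queues):
    """Place each shared result at the right position of every client's queue."""
    out = {c: [None] * len(q) for c, q in queues.items()}
    for ins, result in results.items():
        for client, pos in mapping[ins]:
            out[client][pos] = result
    return out

queues = {"X": ["A", "B"], "Y": ["B", "C"]}
to_send, mapping = merge_and_map(queues)     # "B" appears in both but is sent once
replies = fan_out({"A": "ra", "B": "rb", "C": "rc"}, mapping, queues)
```

The mapping list is what lets one sub-server execution of "B" satisfy its slot in both clients' response messages without disturbing either client's ordering.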
In a specific application, a standby distribution server is usually provided for the distribution server. Therefore, to further improve the implementation effect and the efficiency of feeding back response messages, an extended scheme applying the standby distribution server to the method of the embodiment of the present invention is also provided. As shown in fig. 5, when the distribution server is in a normal operating state, specifically:
in step 501, the standby distribution server is used to temporarily backup a response message sent by the distribution server to a client.
In step 502, when the distribution server confirms that collecting the results of the first client's instruction queue from the sub-servers has timed out, it retrieves from the standby distribution server the latest response message similar to the first client's instruction queue and, with its source marked, sends it to the first client as a transitional display.
In step 503, after the timed-out instruction processing results are completed, they are integrated into a response message and sent to the first client.
The idea behind steps 501-503 is that if the response message for the current request message cannot yet be returned, because some instruction composing it has not been completed by a sub-server or the sub-server's execution has timed out, the distribution server can retrieve from the standby distribution server a historical processing result that is the same as or similar to the current request, label it accordingly, and send it back to the client. For example: the same request message M was requested by client X ten minutes earlier, and after the distribution server fed the response message m back to client X, the response was backed up to the standby distribution server. If the distribution server now receives request message M again from client Y, then while it distributes the instruction queue corresponding to M to the sub-servers, it can fetch from the standby server the historical response content corresponding to M and feed it back to client Y. Client Y's display interface can then show the historical response content for M, and once the distribution server completes integration of the current response message, that is fed back to client Y, whose display interface updates to the response content carried in the latest message.
Correspondingly, client Y's display interface can render the label given by the distribution server in a gray or semi-transparent style, so that before client Y receives the response content of its current request message, a historical response is presented as a transitional display, improving the effectiveness with which an operator obtains information for identical or similar request messages.
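A minimal sketch of the standby-server fallback in steps 501-503 follows, assuming an exact-match similarity test and in-memory storage; both are simplifications, since the patent leaves the similarity criterion and the backup transport open. All names are illustrative.

```python
# Standby-server fallback: back up each sent response, and on timeout return
# the most recent backed-up response for the same request, marked historical.
import time

backup = []  # (timestamp, request, response) tuples mirrored to the standby server

def store_backup(request, response):
    """Step 501: temporarily back up a response sent to a client."""
    backup.append((time.time(), request, response))

def transitional_reply(request):
    """Step 502: latest backed-up response for an identical request,
    labelled so the client can render it grayed / semi-transparent."""
    matches = [(ts, resp) for ts, req, resp in backup if req == request]
    if not matches:
        return None
    _, resp = max(matches)               # most recent by timestamp
    return {"source": "historical", "body": resp}

store_backup("M", "m-v1")
store_backup("M", "m-v2")
reply = transitional_reply("M")          # the newer backup, marked historical
```

The `source` field is what the client inspects to decide on the transitional (gray or semi-transparent) rendering until the fresh response of step 503 arrives.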
In the embodiment of the present invention, when each instruction in the instruction queue is dispatched, a mapping relationship may be established between one or more items of an IP, a physical address, and a virtual identifier of a sub server corresponding to the received instruction and a corresponding instruction in the instruction queue.
Example 2:
an embodiment of the present invention further provides an implementation apparatus for sequential reply based on a message queue, which is used to execute the implementation method for sequential reply based on a message queue described in embodiment 1, and as shown in fig. 5, the implementation apparatus includes an instruction queue generating module, an instruction allocating module, and a processing result combining module, where the modules are connected in sequence, specifically:
the instruction queue generating module splits the received request message of the client into a group of instruction queues which are arranged in sequence.
And the instruction distribution module is used for distributing each instruction in the instruction queue according to the maintained sub-server list and the historical time length of each instruction in the instruction queue completed by each sub-server, so that the total time length of the execution completion of each instruction by each sub-server is shortest.
And the processing result combination module is used for combining the processing results according to the corresponding instruction sequence to obtain a response message after the distribution server receives the processing results of each sub-server, and sending the response message to the client.
It should be noted that, because the contents of information interaction, execution process, and the like between the modules and units in the device are based on the same concept as the processing method embodiment of the present invention, specific contents may refer to the description in the method embodiment 1 of the present invention, and are not described herein again.
Example 3:
please refer to fig. 6, which is a block diagram illustrating an apparatus for implementing message queue order reply according to an embodiment of the present invention. The message queue order reply-based implementation device of the present embodiment includes one or more processors 61 and a memory 62. In fig. 6, one processor 61 is taken as an example.
The processor 61 and the memory 62 may be connected by a bus or other means, such as the bus connection in fig. 6.
The memory 62, as a non-volatile computer-readable storage medium for implementing the method and apparatus based on message queue order reply, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the implementing method based on message queue order reply in embodiment 1 and corresponding program instructions/modules (e.g., the respective modules shown in fig. 5). The processor 61 executes various functional applications and data processing of the device for implementing message queue order reply by running the nonvolatile software program, instructions and modules stored in the memory 62, that is, implements the method for implementing message queue order reply in embodiment 1 and the functions of the respective modules and units in embodiment 2.
The memory 62 may include high speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, the memory 62 may optionally include memory located remotely from the processor 61, and these remote memories may be connected to the processor 61 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The program instructions/modules are stored in the memory 62, and when executed by the one or more processors 61, perform the implementation method of message queue order reply based in the above embodiment 1, for example, perform the steps shown in fig. 2, fig. 3 and fig. 4 described above; the various modules, units described in fig. 5 may also be implemented.
It should be noted that, because the contents of information interaction, execution process, and the like between the modules and units in the device are based on the same concept as the processing method embodiment of the present invention, specific contents may refer to the description in the method embodiment of the present invention, and are not described herein again.
Those of ordinary skill in the art will appreciate that all or part of the steps of the various methods of the embodiments may be implemented by associated hardware as instructed by a program, which may be stored on a computer-readable storage medium, which may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (8)

1. A method for implementing sequential reply based on a message queue, wherein response content replied by a distribution server to a client is arranged in the order of the instruction queue received by the distribution server, characterized in that the method comprises:
after receiving a request message from a client, the distribution server splits the request message into a group of sequentially arranged instruction queues;
each instruction in the instruction queue is distributed according to the maintained sub-server list and the historical time each sub-server has taken to complete each instruction in the queue, so that the total time for all instructions to be executed and completed is shortest;
after receiving the processing results of the sub-servers, the distribution server combines the processing results in the corresponding instruction order to obtain a response message and sends it to the client;
the method further comprises a standby distribution server, specifically:
when the distribution server is in a normal working state, the standby distribution server temporarily backs up the response messages that the distribution server sends to clients;
when the distribution server confirms that collecting the processing results of a first client's instruction queue from the sub-servers has timed out, it searches the standby distribution server for the most recent response message similar to the first client's instruction queue and sends that message to the first client as a transitional display, with its source marked; after the timed-out instructions finish processing, the results are integrated into a response message and sent to the first client.
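The split-dispatch-merge flow of claim 1 can be sketched as follows. This is an illustrative sketch only, not the patented implementation; all function and variable names (`split_request`, `dispatch`, `history`, the sample servers and durations) are hypothetical, and the greedy minimum-load assignment is one simple way to keep the total completion time short.

```python
def split_request(request):
    # Hypothetical splitter: treat a request message as ";"-separated instructions.
    return request.split(";")

def dispatch(instructions, history):
    """Greedily assign each instruction to the sub-server whose accumulated
    load plus historical duration for that instruction is smallest, keeping
    the total completion time (makespan) short."""
    load = {server: 0.0 for server in history}
    assignment = []
    for instr in instructions:
        # 1.0 is an assumed default duration for instructions never seen before.
        best = min(history, key=lambda s: load[s] + history[s].get(instr, 1.0))
        load[best] += history[best].get(instr, 1.0)
        assignment.append((instr, best))
    return assignment

def merge(results, instructions):
    # Reassemble the per-instruction results in the original queue order.
    return [results[instr] for instr in instructions]

# Hypothetical historical durations (seconds) per sub-server and instruction.
history = {
    "server_a": {"query": 0.2, "download": 2.0},
    "server_b": {"query": 0.5, "download": 0.8},
}
plan = dispatch(["query", "download"], history)
```

Here `server_a` is fast at queries and `server_b` at downloads, so the greedy pass sends each instruction where it finishes soonest; the merge step then restores the client-visible order regardless of which sub-server replied first.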
2. The implementation method based on message queue order reply according to claim 1, further comprising:
confirming the various request messages the distribution server can receive; determining the instruction queues into which these request messages can be split, wherein the instruction of each queue member in an instruction queue is processed by a sub-server and then fed back to the distribution server;
and registering the time each sub-server takes to complete the corresponding queue member in each instruction queue, so that the distribution server establishes and uniformly maintains, for each instruction, the processing time required by each sub-server it manages.
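The registration step of claim 2 amounts to maintaining a per-server, per-instruction duration table. The patent does not specify how observed times are folded into the table; the exponential moving average below is purely an assumption, and `register_duration` and its `alpha` parameter are hypothetical names.

```python
def register_duration(table, server, instruction, observed, alpha=0.3):
    """Record an observed completion time in the maintained duration table.
    Assumption: smooth repeated observations with an exponential moving
    average so the table tracks drifting sub-server performance."""
    prev = table.setdefault(server, {}).get(instruction)
    if prev is None:
        table[server][instruction] = observed
    else:
        table[server][instruction] = (1 - alpha) * prev + alpha * observed
    return table[server][instruction]

# Usage: the distribution server updates the table after each result arrives.
table = {}
register_duration(table, "server_a", "database_query", 0.25)
```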
3. The implementation method based on message queue order reply according to claim 2, wherein the method further comprises:
the distribution server adjusts the granularity at which a request message is divided into instructions, and calculates the time required to complete integration of the response message under different division granularities;
the division granularity allocated to each request message is determined by ranking the times required to complete integration of the response messages.
4. The method according to claim 1, wherein after receiving request messages from a plurality of clients, the distribution server splits them into one or more groups of sequentially arranged instruction queues;
identical instructions contained in the instruction queues of each group are combined into a single instruction for distribution to the sub-servers, and a processing-result mapping list for the combined instruction is established, so that when the processing result of the combined instruction is obtained, it is placed into the processing-result queue of each response message according to the mapping list.
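The deduplication of claim 4 can be sketched as follows: identical instructions across clients' queues are dispatched once, and a mapping list records every (client, position) that needs the shared result. This is an illustrative sketch with hypothetical names, not the patented data structure.

```python
from collections import defaultdict

def merge_duplicate_instructions(queues):
    """Combine identical instructions from several clients' queues into one
    dispatch set, recording a mapping list so each client's result queue can
    later be rebuilt from the shared results."""
    mapping = defaultdict(list)  # instruction -> [(client, position), ...]
    for client, queue in queues.items():
        for pos, instr in enumerate(queue):
            mapping[instr].append((client, pos))
    return list(mapping), mapping  # unique instructions to dispatch, mapping list

def build_result_queues(mapping, shared_results, queues):
    """Fan the shared per-instruction results back out into each client's
    processing-result queue, preserving each queue's original order."""
    out = {client: [None] * len(queue) for client, queue in queues.items()}
    for instr, places in mapping.items():
        for client, pos in places:
            out[client][pos] = shared_results[instr]
    return out
```

For example, if two clients both request the same query, it is executed once and its result is copied into both response queues via the mapping list.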
5. The implementation method based on message queue order reply according to any one of claims 1 to 4, wherein, when the instructions in the instruction queue are dispatched, one or more of the IP address, physical address and virtual identifier of the sub-server receiving each instruction are mapped to the corresponding instruction in the instruction queue.
6. The implementation method based on message queue order reply according to any one of claims 1 to 4, wherein the client is a personal computer and the distribution server is an optical transmission board.
7. The implementation method based on message queue order reply according to any one of claims 1 to 4, wherein the request message comprises: a database query, an FTP file download, module state acquisition, and module parameter setting.
8. An implementation apparatus for message queue order based reply, used for implementing the implementation method based on message queue order reply according to any one of claims 1 to 7, the apparatus comprising:
at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, the instructions being programmed to perform the implementation method based on message queue order reply according to any one of claims 1 to 7.
CN201711046688.7A 2017-10-31 2017-10-31 Message queue sequence reply-based implementation method and device Active CN107613025B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711046688.7A CN107613025B (en) 2017-10-31 2017-10-31 Message queue sequence reply-based implementation method and device


Publications (2)

Publication Number Publication Date
CN107613025A CN107613025A (en) 2018-01-19
CN107613025B true CN107613025B (en) 2021-01-08

Family

ID=61084345






Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant