CN116719630B - Case scheduling method, equipment, storage medium and device - Google Patents

Case scheduling method, equipment, storage medium and device

Info

Publication number
CN116719630B
CN116719630B (Application No. CN202311007477.8A)
Authority
CN
China
Prior art keywords
case
priority
target
flow control
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311007477.8A
Other languages
Chinese (zh)
Other versions
CN116719630A (en)
Inventor
赖继鹏
谢陆豪
徐厚雨
杨明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Post Consumer Finance Co ltd
Original Assignee
China Post Consumer Finance Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Post Consumer Finance Co ltd filed Critical China Post Consumer Finance Co ltd
Priority to CN202311007477.8A priority Critical patent/CN116719630B/en
Publication of CN116719630A publication Critical patent/CN116719630A/en
Application granted granted Critical
Publication of CN116719630B publication Critical patent/CN116719630B/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/10Office automation; Time management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/48Indexing scheme relating to G06F9/48
    • G06F2209/484Precedence
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Human Resources & Organizations (AREA)
  • Strategic Management (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Computer And Data Communications (AREA)

Abstract

The invention belongs to the technical field of computers and discloses a case scheduling method, equipment, a storage medium and a device.

Description

Case scheduling method, equipment, storage medium and device
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a case scheduling method, apparatus, storage medium, and device.
Background
In recent years, with the rapid development of internet finance, consumer finance has grown quickly as an important segment of that market. At the same time, characteristics such as the varied scenarios of consumer-finance traffic-referral (customer-acquisition) institutions, differing approval timeliness requirements, and exponential growth in case volume pose challenges to the stability and timeliness of risk approval. How a consumer finance company performs online approval of cases through automated approval, and how it applies customized flow control to a large and rapidly growing number of cases so as to keep the business running continuously and stably, is therefore very important.
In a conventional approval-decision flow control system, flow control is implemented by introducing independent third-party flow control software. For example, the flow control function provided by Sentinel is used in the gateway in front of the service, and cases are processed sequentially in first-in-first-out order; alternatively, cache middleware such as Tair or Redis is introduced to implement distributed flow control based on a leaky-bucket or token-bucket algorithm.
However, the conventional approval-decision flow control methods have obvious limitations and disadvantages. Mainstream flow control software and algorithms mostly perform sequential flow control based on predefined parameters and fixed traffic thresholds, so flow control efficiency drops when business traffic grows. For example, Sentinel depends heavily on the performance and availability of its token server; a single-point token server easily becomes a performance bottleneck, so flow control generally has to be deployed as a cluster, which in turn consumes additional system resources. The current approval-decision flow control schemes therefore cannot apply control tailored to the business scenario and case type, which affects flow control efficiency and results in poor timeliness and stability of case processing.
The foregoing is provided merely for the purpose of facilitating understanding of the technical solutions of the present invention and is not intended to represent an admission that the foregoing is prior art.
Disclosure of Invention
The invention mainly aims to provide a case scheduling method, equipment, a storage medium and a device, and aims to solve the technical problems that the conventional approval decision flow control scheme cannot be combined with a service scene and a case type to carry out corresponding control, so that flow control efficiency is affected, and timeliness and stability of case processing are poor.
In order to achieve the above object, the present invention provides a case scheduling method, which includes the steps of:
priority matching is carried out on the target request case according to case attribute information corresponding to the target request case, priority marking is carried out on the target request case according to a priority matching result, and a priority marking result is obtained;
selecting a target flow control strategy from preset flow control strategies according to system load information corresponding to a preset arrangement service cluster;
automatically adjusting the case starting parameters of the target request case according to the target flow control strategy to obtain target case starting parameter information;
sorting the target request cases according to the priority marking result to obtain a case priority sorting list;
And calling a preset Kafka channel according to the target case starting parameter information and the case priority ranking list, and performing case scheduling on the target request case according to the preset Kafka channel and the preset scheduling service cluster.
Optionally, the preset flow control strategy includes a full-open flow control level strategy, a conventional flow control level strategy and a secondary fusing flow control level strategy; the step of selecting the target flow control strategy from the preset flow control strategies according to the system load information corresponding to the preset arrangement service cluster comprises the following steps:
collecting system load information corresponding to a preset arrangement service cluster in real time, wherein the system load information comprises load index information corresponding to case processing throughput rate, CPU (central processing unit) utilization rate, memory utilization rate and case average processing time length;
determining the load grade corresponding to each service node in the preset arrangement service cluster according to the load index information;
and selecting a target flow control strategy from the full-open flow control level strategy, the conventional flow control level strategy and the secondary fusing flow control level strategy according to the load levels corresponding to the service nodes.
Optionally, the step of automatically adjusting the case starting parameter of the target request case according to the target flow control policy to obtain target case starting parameter information includes:
If the target flow control strategy is a full-open flow control grade strategy, adjusting the case starting rate in the case starting parameters of the target request case to be the maximum value, and obtaining a first case starting parameter;
if the target flow control strategy is a conventional flow control grade strategy, dynamically adjusting the case starting rate in the case starting parameters of the target request case according to the actual load condition of the system to obtain a second case starting parameter;
if the target flow control strategy is a secondary fusing flow control level strategy, the case starting rate in the case starting parameters of the target request case is adjusted to be the minimum value, and a third case starting parameter is obtained;
and determining target case starting parameter information according to the first case starting parameter, the second case starting parameter and the third case starting parameter.
Optionally, the step of calling a preset Kafka channel according to the target case starting parameter information and the case priority ranking list, and performing case scheduling on the target request case according to the preset Kafka channel and the preset scheduling service cluster includes:
selecting a target case to be processed in a preset period and a case priority ordering list corresponding to the target case from the target request cases according to the case priority ordering list and the preset period processing capacity;
Acquiring case starting parameter information corresponding to the target case from the target case starting parameter information;
calling a preset Kafka channel according to the case starting parameter information corresponding to the target case and the case priority ranking list corresponding to the target case, and scheduling the target case according to the preset Kafka channel and the preset scheduling service cluster.
Optionally, the preset Kafka channel includes a Kafka channel corresponding to an emergency priority, a high priority, a medium priority and a low priority; the step of calling a preset Kafka channel according to the case starting parameter information corresponding to the target case and the case priority ranking list corresponding to the target case, and performing case scheduling on the target case according to the preset Kafka channel and the preset scheduling service cluster comprises the following steps:
selecting a target Kafka channel from the Kafka channels corresponding to the emergency priority, the high priority, the medium priority and the low priority according to a priority marking result in the case priority sorting list corresponding to the target case;
and carrying out case scheduling on the target case according to the target Kafka channel and the preset scheduling service node.
Optionally, after the step of calling a preset Kafka channel according to the target case starting parameter information and the case priority ranking list, and performing case scheduling on the target request case according to the preset Kafka channel and the preset scheduling service cluster, the method further includes:
acquiring the number of messages to be processed of a preset Kafka message channel;
judging whether to start the bin processing operation according to the quantity of the to-be-processed messages of the Kafka message channel to obtain a judging result;
according to the judging result, carrying out bin storage processing on the target request cases, and carrying out bin storage priority marking on the cases needing bin storage to obtain a bin storage priority marking result;
inquiring the request cases except the cases with the bin priority identifiers from the case priority sorting list according to the bin priority marking result to obtain an inquiry result;
the request cases are prioritized according to the query result, and an adjusted prioritized list is obtained;
and carrying out case scheduling on the request cases according to the adjusted priority ordering list until the processing capacity of the downstream system is recovered and then releasing the bin processing operation.
Optionally, after the step of releasing the bin processing operation after the processing capability of the downstream system is recovered, the method further includes:
after the bin processing operation is released, recovering the cases marked as bin priorities in the adjusted priority ranking list to the original priorities marked as medium and low priorities in the priority ranking list before adjustment;
if the message backlog situation of the high-priority Kafka channel does not completely subside, marking the high-priority case as an emergency priority, and switching the high-priority case of the subsequent piece to the emergency-priority Kafka channel for processing until the message backlog situation of the high-priority channel subsides, switching the high-priority case back to the high-priority Kafka channel for processing.
In addition, in order to achieve the above object, the present invention also proposes case scheduling equipment, which includes a memory, a processor, and a case scheduler stored on the memory and operable on the processor, the case scheduler being configured to implement the steps of the case scheduling method described above.
In addition, in order to achieve the above object, the present invention also proposes a storage medium having a case scheduler stored thereon, which when executed by a processor, implements the steps of the case scheduling method as described above.
In addition, in order to achieve the above object, the present invention also provides a case scheduling device, including:
the priority matching module is used for performing priority matching on the target request case according to case attribute information corresponding to the target request case, and performing priority marking on the target request case according to a priority matching result to obtain a priority marking result;
the flow control module is used for selecting a target flow control strategy from preset flow control strategies according to system load information corresponding to the preset arrangement service cluster;
the flow control module is further used for automatically adjusting the case starting parameters of the target request case according to the target flow control strategy to obtain target case starting parameter information;
the case starting processing module is used for sequencing the target request cases according to the priority marking result to obtain a case priority sequencing list;
and the channel scheduling module is used for calling a preset Kafka channel according to the target case starting parameter information and the case priority ordering list, and performing case scheduling on the target request case according to the preset Kafka channel and the preset arrangement service cluster.
According to the method, priority matching is performed on the target request case according to case attribute information corresponding to the target request case, and the target request case is marked with a priority according to the priority matching result to obtain a priority marking result. A target flow control strategy is selected from preset flow control strategies according to system load information corresponding to a preset arrangement service cluster. The case starting parameters of the target request case are automatically adjusted according to the priority marking result and the target flow control strategy to obtain target case starting parameter information. The target request cases are sorted according to the priority marking result to obtain a case priority ranking list. In this way, priority matching is performed on the case attribute information of the target request case, case start-up is completed according to the priority matching result and the selected target flow control strategy, and corresponding Kafka channels are set to complete case scheduling.
Drawings
FIG. 1 is a schematic diagram of a case scheduling device of a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a flowchart of a case scheduling method according to a first embodiment of the present invention;
FIG. 3 is a schematic diagram of a case priority marking process according to a first embodiment of the case scheduling method of the present invention;
FIG. 4 is a schematic diagram of a global parameter data center storage structure according to a first embodiment of the case scheduling method of the present invention;
FIG. 5 is a schematic flow chart of an automated flow control process according to a second embodiment of the case scheduling method of the present invention;
FIG. 6 is a flowchart of a case scheduling method according to a third embodiment of the present invention;
FIG. 7 is a schematic diagram of an overall system module according to a third embodiment of the case scheduling method of the present invention;
FIG. 8 is a schematic diagram of a case start processing flow according to a third embodiment of the case scheduling method of the present invention;
FIG. 9 is a schematic diagram illustrating a priority matching policy setting according to a third embodiment of the case scheduling method of the present invention;
fig. 10 is a block diagram illustrating a first embodiment of a case scheduler according to the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Referring to fig. 1, fig. 1 is a schematic diagram of a case scheduling device of a hardware running environment according to an embodiment of the present invention.
As shown in fig. 1, the case scheduling apparatus may include: a processor 1001, such as a central processing unit (Central Processing Unit, CPU), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 is used to enable connected communication between these components. The user interface 1003 may include a display (Display) and, optionally, a standard wired interface and a wireless interface; in the present invention the wired interface of the user interface 1003 may be a USB interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a Wireless-Fidelity (Wi-Fi) interface). The memory 1005 may be a high-speed random access memory (Random Access Memory, RAM) or a non-volatile memory (NVM), such as a disk memory. The memory 1005 may optionally also be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the structure shown in fig. 1 does not constitute a limitation of the case scheduling apparatus, and may include more or fewer components than shown, or may combine certain components, or may be a different arrangement of components.
As shown in FIG. 1, the memory 1005, which is a computer storage medium, may include an operating system, a network communication module, a user interface module, and a case scheduler.
In the case scheduling device shown in fig. 1, the network interface 1004 is mainly used for connecting to a background server, and performing data communication with the background server; the user interface 1003 is mainly used for connecting user equipment; the case scheduling device calls a case scheduling program stored in the memory 1005 through the processor 1001 and executes the case scheduling method provided by the embodiment of the present invention.
Based on the hardware structure, the embodiment of the case scheduling method is provided.
Referring to fig. 2, fig. 2 is a flowchart illustrating a first embodiment of a case scheduling method according to the present invention.
In this embodiment, the case scheduling method includes the following steps:
step S10: and carrying out priority matching on the target request case according to case attribute information corresponding to the target request case, and carrying out priority marking on the target request case according to a priority matching result to obtain a priority marking result.
It should be noted that the execution body in this embodiment may be a device running a case scheduling system, for example a server, a smart phone, a notebook computer, a desktop computer or a tablet computer, or any other case scheduling device capable of implementing the same or similar functions, which is not limited in this embodiment. The case scheduling system provided by this scheme schedules cases automatically and does not require manual review by approval specialists. In this embodiment and the following embodiments, the case scheduling method of the present invention is described by taking a computer as an example.
An automated flow control model is constructed by combining characteristics such as case type, periodically concentrated traffic and service classification, and a case scheduling system is built on the basis of this model so that service requests can be classified and processed in a customizable way. The case scheduling system mainly comprises two parts: a comprehensive service and an arrangement service cluster. The comprehensive service mainly comprises five modules: a priority matching policy setting module, a case priority marking module, a global parameter data center module, a case start-up processing module and a flow automation control module. For example, case requests for scenarios such as credit granting and credit-limit adjustment are aggregated and enter the comprehensive service as case Kafka messages; the case priority marking module obtains the case priority matching policy from the global parameter data center and marks the priority of each case in real time; the case start-up processing module sorts the marked cases and then starts case processing in order of priority from high to low, interacting with the arrangement service cluster asynchronously through Kafka message middleware. The arrangement service cluster comprises a plurality of arrangement service nodes, and each service node listens to the messages of each priority Kafka channel and processes cases in real time.
It should be understood that a target request case refers to a request case of any service request type in an internet finance scenario, for example a service request in a credit-authorization scenario, a credit-verification scenario or a credit-limit adjustment scenario. In this scheme, priority matching is performed on the target request case through its case attribute information, i.e., the scenario type, product type and channel type contained in the case attributes, and the priority corresponding to the target request case is determined. The priority matching result refers to the priority level corresponding to each target request case, and the priority levels may be divided into urgent (emergency), high, medium and low. The priority marking result comprises the priority marks corresponding to all request cases, for example: among 10 requests, 4 cases are marked as emergency priority, 3 as high priority, 2 as medium priority and 1 as low priority.
To further explain the priority marking flow of this scheme, reference may be made to the case priority marking flow diagram shown in fig. 3. When a case processing request is received, the priority matching policy is obtained from the global parameter data center, the currently requested case is marked with a priority in real time according to that policy, and the case request record is stored in the case priority ranking table of the global parameter data center module, where the ranking table is persisted in the global parameter data center. The case priority ranking list comprises the case priority ranking table and the priority matching policy. The priority matching policy refers to the policy for performing priority matching on a target request case according to its case attribute information; the priority matching result determined by that policy is used to mark the priority of the target request case, so as to obtain the priority marking result, and the case priority ranking list corresponding to the target request case can be determined from the priority marking result, which makes it convenient to query the records stored in the case priority ranking table from the global parameter data center during later case start-up. For the global parameter data center, reference may be made to the storage structure diagram shown in fig. 4: the global parameter data center comprises a parameter center and a data center; the parameter center contains the priority matching policy, the case start rate, the number of retries after case start-up processing failure, and the like, and the data center may contain the case priority ranking table and other data record tables.
It can be understood that the global parameter data center module, also referred to as a central storage module, is configured to store global shared parameters and data between modules of the system, such as: priority matching strategy, case starting rate, case starting processing failure retry times, case priority ordering data information and the like, and each module in the system can read or modify corresponding parameters and data information in real time. The global parameter data center module is used for storing the policy parameters and the case request records, and is also used for storing the case starting rate, the maximum case starting rate, the minimum case starting rate, the number of times of failed retry of case starting processing and the like.
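To make the storage structure of fig. 4 concrete, the sketch below models the global parameter data center as a parameter section and a data section shared by the modules. This is a minimal sketch under stated assumptions: the patent does not prescribe a concrete store, so the field names and the in-memory, thread-safe representation are illustrative only.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// Illustrative in-memory model of the global parameter data center (parameter center + data center).
class GlobalParameterDataCenter {
    // Parameter center: globally shared parameters, readable and modifiable by every module in real time.
    final Map<String, String> priorityMatchingPolicy = new ConcurrentHashMap<>(); // rule id -> rule definition
    volatile double caseStartRate;
    volatile double maxCaseStartRate;
    volatile double minCaseStartRate;
    volatile int caseStartFailureRetries;

    // Data center: case priority ranking table and other shared data records.
    final List<String> casePriorityRankingTable = new CopyOnWriteArrayList<>(); // case ids in priority order
}
```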
It should be understood that the functions of the system modules mentioned in this scheme, and the relationships between them, are as follows. The priority matching policy setting module is configured to set the case priority matching policy in a visual manner, where case priorities are classified into urgent, high, medium, low and bin priorities, and the matching attributes include scenario type, product, channel and so on. A system user can customize the priority matched to a case according to different case attributes, for example matching offline large-amount products from the self-owned APP channel in the credit-granting scenario to high priority. The configured matching policy is stored in the global parameter data center module and takes effect in real time. The case priority matching and marking module is used for marking the priority of each case in real time according to the priority matching policy and storing the result in the case priority ranking table.
In the specific implementation, priority matching is performed on the target request case according to case attribute information corresponding to the target request case, priority marking is performed on the target request case according to a priority matching result, and a priority marking result is obtained.
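As a minimal illustration of this step, the sketch below matches a case's attribute tuple (scenario type, product type, channel type) against configurable rules and falls back to a default priority. The class, field and value names are assumptions for illustration, not the patent's actual implementation.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of attribute-based priority matching; all names are illustrative.
enum CasePriority { URGENT, HIGH, MEDIUM, LOW, BIN }

class CaseAttributes {
    final String sceneType;   // e.g. "CREDIT_GRANT", "LIMIT_ADJUST" (assumed values)
    final String productType; // e.g. "LARGE_AMOUNT_OFFLINE"
    final String channelType; // e.g. "SELF_OWNED_APP"
    CaseAttributes(String sceneType, String productType, String channelType) {
        this.sceneType = sceneType;
        this.productType = productType;
        this.channelType = channelType;
    }
}

class PriorityRule {
    final String sceneType, productType, channelType; // null means "match any value"
    final CasePriority priority;
    PriorityRule(String sceneType, String productType, String channelType, CasePriority priority) {
        this.sceneType = sceneType;
        this.productType = productType;
        this.channelType = channelType;
        this.priority = priority;
    }
    boolean matches(CaseAttributes c) {
        return (sceneType == null || sceneType.equals(c.sceneType))
                && (productType == null || productType.equals(c.productType))
                && (channelType == null || channelType.equals(c.channelType));
    }
}

class PriorityMatcher {
    private final List<PriorityRule> rules = new ArrayList<>();

    void addRule(PriorityRule rule) { rules.add(rule); }

    // The first matching rule wins; unmatched cases fall back to LOW priority (an assumed default).
    CasePriority mark(CaseAttributes attributes) {
        for (PriorityRule rule : rules) {
            if (rule.matches(attributes)) {
                return rule.priority;
            }
        }
        return CasePriority.LOW;
    }
}
```

Under these assumptions, a rule such as new PriorityRule("CREDIT_GRANT", "LARGE_AMOUNT_OFFLINE", "SELF_OWNED_APP", CasePriority.HIGH) would reproduce the example above of marking offline large-amount products from the self-owned APP channel in the credit-granting scenario as high priority.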
Step S20: and selecting a target flow control strategy from the preset flow control strategies according to the system load information corresponding to the preset arrangement service cluster.
The preset orchestration service cluster may be a preset service cluster for processing requests; the cluster may be formed by a plurality of service nodes, and a service node may be a computer device. The system load information refers to the running state and load of the service nodes in the orchestration service cluster. Because the service cluster in this scheme is distributed, the running state of each service node differs, and because the cases processed by each service node differ, the corresponding load information also differs; combining the system load information corresponding to the preset orchestration service cluster therefore allows a flow control strategy suited to both the service cluster and the target request case to be determined more accurately.
Further, in order to improve system processing efficiency, the running state of the service nodes in the preset arrangement service cluster can be monitored in real time, and a target flow control strategy is then determined from the preset flow control strategies according to the collected load information, so as to achieve automated and differentiated control of case traffic. The preset flow control strategies comprise a full-open flow control level strategy, a conventional flow control level strategy and a secondary fusing flow control level strategy. The step S20 further includes: collecting, in real time, system load information corresponding to the preset arrangement service cluster, wherein the system load information comprises load index information corresponding to case processing throughput rate, CPU (central processing unit) usage, memory usage and average case processing duration; determining the load level corresponding to each service node in the preset arrangement service cluster according to the load index information; and selecting the target flow control strategy from the full-open flow control level strategy, the conventional flow control level strategy and the secondary fusing flow control level strategy according to the load levels corresponding to the service nodes.
It should be noted that system load information corresponding to each service node in the orchestration service cluster is collected, where the system load information comprises load index information corresponding to case processing throughput rate, CPU usage, memory usage and average case processing duration, and a flow control strategy of the matching level is selected in real time according to that load index information. The load level of each service node is determined by comparing its load index information with the load indexes corresponding to the preset levels; according to the comparison result, the load level of each service node is determined, and the load levels may comprise three levels: severe load, high load and light load. The target flow control strategy is then selected from the full-open flow control level strategy, the conventional flow control level strategy and the secondary fusing flow control level strategy according to the load level.
It should be understood that the flow automation control module can monitor the running state of the service nodes and the system load in real time and intelligently select the corresponding flow control strategy accordingly, so as to achieve automated and differentiated control of case traffic. When the machine load of a service node is light and system resources are sufficient, the full-open flow control level strategy is selected, i.e., no flow control is performed, so that the system runs at full load and system resources are used to the greatest extent to reach the highest case processing throughput rate. When the machine load of a service node is high, the conventional flow control level strategy is selected; flow control is then required, and the case start rate is automatically adjusted in real time, ensuring that the case processing throughput rate is improved as much as possible while the system stays in a healthier state. When the load pressure of a service node is severe, processing performance degrades and a large number of case requests accumulate and cannot be processed in time; the secondary fusing flow control level strategy is then selected, the case start rate is adjusted to the minimum value, the processing of high-priority case requests is guaranteed first, and the availability of the system is preserved.
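The following sketch illustrates the kind of mapping described here, from collected load indicators to a load level and then to one of the three flow control level strategies. The identifiers are assumptions, and so are the thresholds, except that 0.45 mirrors the 45% CPU/memory figure used in the worked example later in this description; the severe-load threshold is an assumed value not disclosed in the patent.

```java
// Illustrative sketch: load indicators -> load level -> flow control level strategy.
enum LoadLevel { LIGHT, HIGH, SEVERE }
enum FlowControlStrategy { FULL_OPEN, CONVENTIONAL, SECONDARY_FUSING }

class LoadIndicators {
    double caseThroughputTps;   // case processing throughput rate
    double cpuUsage;            // 0.0 - 1.0
    double memoryUsage;         // 0.0 - 1.0
    double avgProcessingMillis; // average case processing duration
}

class FlowControlSelector {
    private static final double LIGHT_THRESHOLD = 0.45;  // mirrors the 45% figure in the worked example
    private static final double SEVERE_THRESHOLD = 0.85; // assumed value, not from the patent

    LoadLevel classify(LoadIndicators load) {
        if (load.cpuUsage < LIGHT_THRESHOLD && load.memoryUsage < LIGHT_THRESHOLD) {
            return LoadLevel.LIGHT;
        }
        if (load.cpuUsage >= SEVERE_THRESHOLD || load.memoryUsage >= SEVERE_THRESHOLD) {
            return LoadLevel.SEVERE;
        }
        return LoadLevel.HIGH;
    }

    FlowControlStrategy select(LoadLevel level) {
        switch (level) {
            case LIGHT:  return FlowControlStrategy.FULL_OPEN;        // no flow control, run at full load
            case HIGH:   return FlowControlStrategy.CONVENTIONAL;     // dynamically adjust the case start rate
            default:     return FlowControlStrategy.SECONDARY_FUSING; // minimum rate, protect high-priority cases
        }
    }
}
```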
Step S30: and automatically adjusting the case starting parameters of the target request case according to the target flow control strategy to obtain target case starting parameter information.
It should be noted that the target case starting parameter information includes parameters such as the case start rate, the number of retries after case start-up failure and the number of cases to start. The case start-up processing module obtains the cases marked by the case priority matching and marking module from the case priority ranking table, sorts them in real time from high to low priority, obtains the case starting parameter information from the global parameter data center module (for example, the case start rate and the number of retries after a failed case start-up), and periodically fetches the corresponding number of cases to perform case start-up processing.
Step S40: and ordering the target request cases according to the priority marking result to obtain a case priority ordering list.
It should be noted that the case priority ranking list is the case scheduling list generated after the priority-marked target request cases are sorted according to the priority marking result. The sorting may follow the five priority levels of urgent, high, medium, low and bin in order, and the case priority ranking list is generated from the sorted case order so that case scheduling and processing can later be completed according to this list.
Step S50: and calling a preset Kafka channel according to the target case starting parameter information and the case priority ranking list, and performing case scheduling on the target request case according to the preset Kafka channel and the preset scheduling service cluster.
It should be noted that the preset Kafka channels are data channels preset for transmitting data, where the channel to use can be determined according to the priority marking result. Kafka is a distributed message queue (Message Queue) based on the publish/subscribe model and is mainly applied to real-time big-data processing. Unlike the traditional message-queue application scenario, in which messages are processed one by one in sequence and data cannot be delivered one-to-many, Kafka supports one-to-many delivery, so this scheme interacts with the arrangement service cluster asynchronously through Kafka message middleware. The arrangement service cluster comprises a plurality of arrangement service nodes, and each service node listens to the messages of each priority Kafka channel and processes cases in real time.
It can be understood that in this scheme, cases from different service scenarios (such as credit granting and credit-limit adjustment) are aggregated as case Kafka messages and enter the comprehensive service part. After the case priority marking module obtains the case priority matching policy from the global parameter data center, it marks the priority of each case in real time; the case start-up processing module then sorts the marked cases and starts case processing in order of priority from high to low, interacting with the arrangement service cluster asynchronously through the Kafka message middleware. The arrangement service cluster comprises a plurality of arrangement service nodes, and each service node listens to the messages of each priority Kafka channel and schedules the cases in real time.
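A minimal sketch of how a marked case could be published to a per-priority Kafka topic with the standard Kafka producer client is given below. The topic names, the serialized payload and the bootstrap address are assumptions for illustration; the patent does not disclose concrete channel names.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

// Illustrative dispatcher: one Kafka topic per priority channel (topic names are assumed).
class PriorityChannelDispatcher {
    private final KafkaProducer<String, String> producer;

    PriorityChannelDispatcher(String bootstrapServers) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        this.producer = new KafkaProducer<>(props);
    }

    // Map the priority mark to its dedicated channel (urgent / high / medium / low).
    private String topicFor(String priority) {
        switch (priority) {
            case "URGENT": return "case-channel-urgent";
            case "HIGH":   return "case-channel-high";
            case "MEDIUM": return "case-channel-medium";
            default:       return "case-channel-low";
        }
    }

    // The orchestration service nodes subscribe to these topics and process the cases asynchronously.
    void dispatch(String caseId, String priority, String casePayloadJson) {
        producer.send(new ProducerRecord<>(topicFor(priority), caseId, casePayloadJson));
    }

    void close() {
        producer.close();
    }
}
```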
In this embodiment, priority matching is performed on the target request case according to its case attribute information, and the target request case is marked with a priority according to the priority matching result to obtain a priority marking result. A target flow control strategy is selected from the preset flow control strategies according to the system load information corresponding to the preset arrangement service cluster. The case starting parameters of the target request case are automatically adjusted according to the target flow control strategy to obtain target case starting parameter information. The target request cases are sorted according to the priority marking result to obtain a case priority ranking list. In this way, this embodiment performs priority matching on the case attribute information of the target request case, completes case start-up according to the priority matching result and the selected target flow control strategy, and sets up corresponding Kafka channels to complete case scheduling.
Based on the first embodiment shown in fig. 2, a second embodiment of the case scheduling method of the present invention is proposed.
In this embodiment, the step S30 further includes: if the target flow control strategy is a full-open flow control grade strategy, adjusting the case starting rate in the case starting parameters of the target request case to be the maximum value, and obtaining a first case starting parameter; if the target flow control strategy is a conventional flow control grade strategy, dynamically adjusting the case starting rate in the case starting parameters of the target request case according to the actual load condition of the system to obtain a second case starting parameter; if the target flow control strategy is a secondary fusing flow control level strategy, the case starting rate in the case starting parameters of the target request case is adjusted to be the minimum value, and a third case starting parameter is obtained; and determining target case starting parameter information according to the first case starting parameter, the second case starting parameter and the third case starting parameter.
It should be noted that if the target flow control strategy is the full-open flow control level strategy, the case start rate is adjusted to its maximum value, and a first case starting parameter is obtained. If the target flow control strategy is the conventional flow control level strategy, the cases previously parked in the bin and the subsequent high-priority cases are processed, and the case start rate is adjusted dynamically according to the actual load, yielding a second case starting parameter. If the target flow control strategy is the secondary fusing flow control level strategy, the case start rate is adjusted to its minimum value, yielding a third case starting parameter. In this way, the scheme adjusts the case starting parameters of the target case according to the full-open, conventional and secondary fusing flow control level strategies to obtain the target case starting parameter information.
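The branching described above can be expressed as a small sketch. The class names, the dynamic-adjustment input and the idea of clamping the rate between the configured minimum and maximum are placeholders and assumptions, not values or code from the patent.

```java
// Illustrative sketch of adjusting the case start rate according to the selected strategy.
class CaseStartParameters {
    double caseStartRate;    // cases started per second (TPS)
    int startFailureRetries; // retries after a failed case start-up
}

class CaseStartRateAdjuster {
    private final double maxRate; // maximum case start rate from the global parameter data center
    private final double minRate; // minimum case start rate from the global parameter data center

    CaseStartRateAdjuster(double maxRate, double minRate) {
        this.maxRate = maxRate;
        this.minRate = minRate;
    }

    CaseStartParameters adjust(String strategy, CaseStartParameters params, double dynamicRate) {
        switch (strategy) {
            case "FULL_OPEN":          // full-open: run at the maximum rate, no flow control
                params.caseStartRate = maxRate;
                break;
            case "CONVENTIONAL":       // conventional: rate tuned dynamically from the current load
                params.caseStartRate = Math.max(minRate, Math.min(maxRate, dynamicRate));
                break;
            case "SECONDARY_FUSING":   // secondary fusing: drop to the minimum rate
            default:
                params.caseStartRate = minRate;
                break;
        }
        return params;
    }
}
```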
It can be understood that, to further illustrate this, the scheme divides flow control into three flow control level strategies according to the degree of traffic restriction; reference may be made to the flow automation control flow diagram shown in fig. 5. The system dynamically selects the corresponding flow control level strategy to execute according to the load of the service cluster machines and automatically adjusts the case start rate. Denote by S the flow control level strategy selected by the system, by CU (CPU usage) the CPU usage and by MU (Memory usage) the memory usage. The level selection logic is as follows: when both CU and MU are below the light-load threshold, the current load is judged to be light; when CU or MU exceeds that threshold but the load is not yet severe, the current load is judged to be high; when the load pressure is severe, the current load is judged to be severe (in the worked example below, the light-load threshold for both CU and MU is 45%).
It can be understood that in this scheme the target flow control strategy is selected from the preset flow control strategies through the system load information corresponding to the preset arrangement service cluster: the system load information of the arrangement service cluster is collected in real time, the level corresponding to that load information is determined according to the level selection logic above, and the target flow control strategy is selected from the full-open, conventional and secondary fusing flow control level strategies according to the corresponding load level. (1) Full-open flow control level strategy: the service cluster machines are lightly loaded and resources are sufficient; traffic is fully opened and no flow control is performed, so that the system runs at full load and system resources are used to the greatest extent, allowing the system to reach the highest case processing throughput rate.
(2) Conventional flow control level strategy: the service cluster machines are under higher load and the average case processing duration fluctuates considerably. Flow control is now required: the case start rate is adjusted in real time according to the system load, ensuring that the case processing throughput rate is improved as much as possible while the system stays in a healthier state. In this strategy mode, if cases marked with the bin priority are detected, the system automatically releases those cases from the bin. Traffic is also subject to automatic congestion control: when the case processing throughput rate of the current cycle rises compared with the previous cycle, the case start rate is increased to further raise throughput (additive increase); if in the next cycle the system becomes congested because of that increase (the case processing throughput rate falls), the case start rate is reduced (multiplicative decrease). Let the case start rate be V, the minimum case start rate be V_min, the case processing throughput rate of the current cycle be CTPS and that of the previous cycle be LTPS; the case start rate is then automatically increased or decreased according to this additive-increase/multiplicative-decrease rule.
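Since the original rate-update formula is given only in figure form, the sketch below reconstructs the described additive-increase/multiplicative-decrease behaviour under stated assumptions: the size of the additive step and the halving factor are illustrative choices, not values disclosed in the patent.

```java
// Hedged reconstruction of the conventional-strategy rate control:
// additive increase while throughput keeps rising, multiplicative decrease when it drops.
class AimdRateController {
    private final double minRate;      // minimum case start rate (V_min)
    private final double maxRate;      // maximum case start rate
    private final double additiveStep; // assumed increment per cycle
    private final double decreaseFactor = 0.5; // assumed multiplicative back-off

    AimdRateController(double minRate, double maxRate, double additiveStep) {
        this.minRate = minRate;
        this.maxRate = maxRate;
        this.additiveStep = additiveStep;
    }

    // v: current case start rate V; ctps: this cycle's throughput; ltps: last cycle's throughput.
    double nextRate(double v, double ctps, double ltps) {
        if (ctps >= ltps) {
            // Throughput did not drop: probe for more capacity (additive increase).
            return Math.min(maxRate, v + additiveStep);
        }
        // Throughput dropped after the last increase: back off quickly (multiplicative decrease).
        return Math.max(minRate, v * decreaseFactor);
    }
}
```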
(3) Secondary fusing flow control level strategy: the load pressure on the service cluster is severe, processing performance degrades, and a large number of case requests accumulate and cannot be handled in time. Secondary fusing is then enabled: the system re-marks medium- and low-priority cases with the bin priority and adjusts the case start rate to its minimum value, so that high-priority case requests are processed first and the availability of the system is preserved.
The invention is further described below with reference to a concrete example in which high-, medium- and low-priority cases are processed simultaneously. Assume the system needs to process the following three kinds of cases at the same time: (1) large-amount credit cases from the offline channel (C1): large-limit products from an offline channel with high timeliness requirements, to be processed at high priority; (2) medium-amount credit cases (C2): medium-limit products with ordinary timeliness requirements, to be processed at medium priority; (3) credit-limit adjustment cases from the self-operated channel (C3): a large volume of in-loan cases with low timeliness requirements, to be processed at low priority. The case arrival rates of the C1, C2 and C3 types are 1 TPS, 4 TPS and 30 TPS respectively, and the optimal number of cases the system can process per second is 10. Before cases are processed, the following policy and parameter initialization steps are performed. Step one: set the case priority matching policy. According to case type, product number and channel number, C1, C2 and C3 cases are set to be processed at high, medium and low priority respectively. Step two: initialize the relevant system parameters of the global parameter data center. Because the arrival rate of high-priority C1 cases is 1 TPS, the minimum case start rate of the system is set to 1 TPS so that C1 cases can be processed in time at any moment, and the case start rate and the maximum case start rate are both set to 99 TPS (which exceeds the sum of the case arrival rates, i.e., no traffic limitation is applied). After this initialization, the system starts to process cases, as follows. First, after entering the priority marking module, the C1, C2 and C3 cases are marked as high, medium and low priority, and the case request records are stored in the case priority ranking table of the parameter data center. The first case start-up processing cycle task then runs, and the system starts case processing for all C1, C2 and C3 cases. At this point the flow automation control module begins to operate; the first round of collected load information shows CPU and memory usage of 25% and 30% respectively, so the full-open flow control level strategy is selected and the case start rate is unchanged (still the maximum case start rate). In the next several task cycles, the CPU and memory usage collected by the flow control module still does not exceed the 45% threshold, and the system continues to start case processing for all C1, C2 and C3 cases.
After the system has executed the full-open flow control level strategy and started cases for 10 minutes, the CPU and memory usage collected by the flow control module reach 55% and 60% respectively (exceeding the 45% threshold), so the conventional flow control level strategy is immediately selected and case traffic is controlled. Taking the current system time as 10:00 and the automated flow control task cycle as 1 minute, the flow control timeline of the system over the following cycles is as described below.
As this operation timeline shows, the system periodically collects load information such as CPU usage, automatically selects the corresponding flow control level strategy to execute according to the range the load information falls into, and automatically adjusts the case start rate. When the system load keeps rising and message backlog appears, the multiplicative-decrease algorithm is used to lower the case start rate, or the secondary fusing flow control level strategy is executed to adjust the case start rate to its minimum value, rapidly reducing the system load while ensuring that high-priority cases are still processed in time, so as to guarantee the stability and availability of the system. When the system load returns to a healthy state (CPU and memory usage below 45%), the full-open flow control level strategy is executed again so as to use system resources to the maximum and improve the system's case processing throughput rate.
In this embodiment, priority matching is performed on the target request case according to its case attribute information, and the target request case is marked with a priority according to the priority matching result to obtain a priority marking result. A target flow control strategy is selected from the preset flow control strategies according to the system load information corresponding to the preset arrangement service cluster. The target request cases are sorted according to the priority marking result to obtain a case priority ranking list, and the target cases to be processed in a preset cycle are selected from the target request cases according to the case priority ranking list and the preset cycle processing capacity. The case starting parameters of the target cases are adjusted according to the full-open, conventional and secondary fusing flow control level strategies to obtain target case starting parameter information. In this way, this embodiment performs priority matching on the case attribute information of the target request case, completes case start-up according to the priority matching result and the selected target flow control strategy, and sets up corresponding Kafka channels to complete case scheduling.
Referring to fig. 6, fig. 6 is a flowchart of a third embodiment of the case scheduling method according to the present invention, which is proposed based on the first embodiment shown in fig. 2.
In this embodiment, the step S50 further includes:
step S501: and selecting a target case to be processed in a preset period and a case priority ordering list corresponding to the target case from the target request cases according to the case priority ordering list and the preset period processing capacity.
It should be noted that the preset cycle processing capacity may be the number of cases that can be processed in one case processing cycle, for example 10 cases per second. According to the case priority ranking table and the preset cycle processing capacity, the number of cases that can be processed is selected from the target request cases for subsequent scheduling. For example: the case priority ranking table contains 100 cases sorted in order of the urgent, high, medium, low and bin priorities, and 10 cases can be processed within the preset 1-second cycle, so 10 target cases are extracted per cycle from the 100 ranked cases. A target case is a case obtained from the target request cases after priority sorting.
In a specific implementation, the case priority ranking list corresponding to the target cases is obtained by taking the priority marking results of the selected target cases from the case priority ranking table and sorting those target cases by priority according to the marking results.
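A small sketch of the per-cycle selection described here: cases are sorted by their priority mark and the first periodCapacity entries become the target cases for the current start-up cycle. The priority order (urgent > high > medium > low > bin) follows the text, while the data model and the tie-breaking by request time are assumptions.

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

// Illustrative per-cycle selection from the case priority ranking list.
class RankedCase {
    final String caseId;
    final int priorityRank; // 0=urgent, 1=high, 2=medium, 3=low, 4=bin
    final long requestTimeMillis;
    RankedCase(String caseId, int priorityRank, long requestTimeMillis) {
        this.caseId = caseId;
        this.priorityRank = priorityRank;
        this.requestTimeMillis = requestTimeMillis;
    }
}

class CaseStartSelector {
    // Pick the cases to start in this cycle: highest priority first, earlier requests first within a priority.
    List<RankedCase> selectForPeriod(List<RankedCase> rankingList, int periodCapacity) {
        return rankingList.stream()
                .sorted(Comparator.comparingInt((RankedCase c) -> c.priorityRank)
                        .thenComparingLong(c -> c.requestTimeMillis))
                .limit(periodCapacity)
                .collect(Collectors.toList());
    }
}
```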
Step S502: and acquiring the case starting parameter information corresponding to the target case from the target case starting parameter information.
It should be noted that, to ensure system processing efficiency, the cases marked by the case priority matching and marking module are obtained and sorted in real time from high to low priority; the target cases to be processed are taken from the sorted target request cases according to the preset cycle; the target flow control strategy is selected from the full-open, conventional and secondary fusing flow control level strategies according to the load level; the case starting parameters are obtained from the global parameter data center module and the starting parameters of the target cases are adjusted in real time; and case start-up processing is performed according to the adjusted parameters.
Step S503: calling a preset Kafka channel according to the case starting parameter information corresponding to the target case and the case priority ranking list corresponding to the target case, and scheduling the target case according to the preset Kafka channel and the preset scheduling service cluster.
It should be noted that, the preset Kafka channel includes a Kafka channel corresponding to an emergency priority, a high priority, a medium priority and a low priority;
further, the step S503 further includes: selecting a target Kafka channel from the Kafka channels corresponding to the emergency priority, the high priority, the medium priority and the low priority according to a priority marking result in the case priority sorting list corresponding to the target case; and carrying out case scheduling on the target case according to the target Kafka channel and the preset scheduling service node.
It should be noted that, to ensure case processing efficiency, this scheme constructs multiple Kafka channels in advance, namely channels corresponding to the urgent, high, medium and low priorities. By pre-building Kafka channels matched to the priority marking result, message backlog caused by case accumulation can be avoided; compared with existing schemes that transmit cases one by one in their sorted order, this scheme can transmit multiple messages to multiple channels concurrently, improving data processing efficiency.
It should be understood that, reference may be made to the overall system module schematic diagram shown in fig. 7, where the case processing request is sent to the corresponding priority Kafka channel according to the priority identifier, so that each service node of the downstream orchestration service cluster performs approval decision processing. And the orchestration service cluster flexibly expands or contracts the cluster in a containerized manner based on the cloud native technology. For example: cases with priorities marked as emergency priorities are sent to an emergency Kafka channel, cases with priorities marked as high priorities are sent to a high-priority Kafka channel, cases with priorities marked as medium priorities are sent to a medium-priority Kafka channel, and cases with priorities marked as low priorities are sent to a low-priority Kafka channel.
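As a rough illustration of the channel routing in fig. 7, the sketch below uses the kafka-python client; the topic names, the bootstrap address and the message layout are assumptions made for illustration only.

```python
import json
from kafka import KafkaProducer  # kafka-python client

# Hypothetical topic names: one Kafka channel per priority, mirroring fig. 7.
PRIORITY_TOPICS = {
    "emergency": "case-emergency",
    "high": "case-high",
    "medium": "case-medium",
    "low": "case-low",
}

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def dispatch_case(case: dict) -> None:
    """Send a case processing request to the Kafka channel matching its priority mark."""
    topic = PRIORITY_TOPICS[case["priority"]]
    producer.send(topic, value=case)

# A case marked as high priority is sent to the high-priority channel, where the
# downstream orchestration service cluster consumes it for approval decision processing.
# dispatch_case({"case_id": "C-001", "priority": "high"})
```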
It can be understood that, referring to the case start-up processing flow chart shown in fig. 8, the "case starting rate" and "number of retries after failed start-up processing" parameters of the global parameter data center module can be obtained in real time. In each period, the cases marked as emergency, high, medium and low priorities (that is, all priorities except the bin priority) are queried from the case priority sorting list, sorted from high to low priority, and a number of cases corresponding to the rate is obtained. The case processing requests are then sent to the Kafka channels of the corresponding priorities according to the priority identifiers, so that each service node of the downstream orchestration service cluster performs approval decision processing. The orchestration service cluster flexibly expands or contracts in a containerized manner based on cloud native technology.
In a specific implementation, the case starting parameters are obtained from the global parameter data center. According to parameters such as the case starting rate and the number of processing-failure retries contained in the case starting parameters, and according to the case starting task period, the case records other than those with the bin priority are queried from the case priority sorting list. The cases in the queried records are sorted from high to low priority, and the sorted cases are sent to the priority Kafka channels corresponding to their priorities.
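A sketch of one case starting task period along the lines of fig. 8 is given below; it reuses select_target_cases and dispatch_case from the earlier sketches, and the parameter keys read from the global parameter data center (case_start_rate, start_failure_retries) are hypothetical names.

```python
def case_startup_cycle(global_params: dict, sorting_list) -> None:
    """One case starting task period (illustrative only)."""
    start_rate = global_params["case_start_rate"]          # cases started per period
    max_retries = global_params["start_failure_retries"]   # retries after a failed start

    # Query case records other than those with the bin priority, highest priority first.
    candidates = [c for c in sorting_list if c.priority != "bin"]
    batch = select_target_cases(candidates, start_rate)

    for case in batch:
        for attempt in range(max_retries + 1):
            try:
                dispatch_case({"case_id": case.case_id, "priority": case.priority})
                break
            except Exception:
                if attempt == max_retries:
                    # Give up for this period; the case remains in the sorting list.
                    pass
```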
In this embodiment, after the step S50, the method further includes: acquiring the number of messages to be processed of a preset Kafka message channel; judging whether to start the bin processing operation according to the quantity of the to-be-processed messages of the Kafka message channel to obtain a judging result; according to the judging result, carrying out bin storage processing on the target request cases, and carrying out bin storage priority marking on the cases needing bin storage to obtain a bin storage priority marking result; inquiring the request cases except the cases with the bin priority identifiers from a case priority ranking table according to the bin priority marking result to obtain an inquiry result; the request cases are prioritized according to the query result, and an adjusted prioritized list is obtained; and carrying out case scheduling on the request cases according to the adjusted priority ordering list until the processing capacity of the downstream system is recovered and then releasing the bin processing operation.
It should be noted that, in order to avoid a reduction in processing capacity caused by a failure of the downstream system, which would in turn cause a serious backlog in the high-priority Kafka channel, this scheme also addresses the case backlog problem. In this scheme, the number of messages to be processed in the preset Kafka message channel is obtained; this number may be the number of messages that remain unprocessed because the system keeps performing case starting processing while the downstream system fails to process the messages in time, so that a message backlog occurs. Whether to start the bin processing operation is determined by judging whether this number exceeds a preset threshold, so as to reduce the pressure on the downstream system. The judging result is either started or not started; when the number of messages to be processed in the Kafka message channel does not exceed the preset threshold, it is judged that the bin processing operation is not started.
It can be understood that, when the number of messages to be processed in the Kafka message channel exceeds the preset threshold, it is judged that the bin processing operation should be started, and all subsequent request cases are subjected to bin processing, that is, subsequent request cases are marked as bin priority and the system suspends case starting for the cases marked as bin priority. The request cases other than those with the bin priority mark are queried from the case priority sorting list (this list refers to the priority sorting list corresponding to each target request case before bin processing) to obtain a query result, the query result being the cases other than those with the bin priority mark. The cases other than those with the bin priority mark are then prioritized to obtain an adjusted priority sorting list, and case scheduling is performed on the request cases according to the adjusted priority sorting list until the processing capacity of the downstream system is recovered, after which the bin processing operation is released.

Cases in different scenarios, for different products or from different channels need to be divided into different priorities, which facilitates differentiated processing by the case starting processing module and the flow automation control module. The case priority system is divided, from high to low, into five priorities: emergency, high, medium, low and bin. In general, the post-loan cases that occur in large numbers are set to low priority, the major pre-loan products in some scenarios are set to high priority, and most other types of cases default to medium priority; the emergency priority is used to match the corresponding type of case to the emergency priority when serious congestion occurs in one of the high, medium or low priority channels within a certain period. In addition, when a problem occurs in a downstream service such as the orchestration service cluster and certain types of cases cannot be processed normally, the corresponding type of case can be matched to the bin priority in order to avoid aggravating the downstream problem; after the bin priority is set, the case starting processing module no longer performs case starting processing on those cases. When the downstream service returns to normal, the corresponding type of case is matched back to its original priority, and the case starting processing module resumes case starting processing for those cases.

To further explain the priority matching policy setting for the bin processing scenario in this embodiment, reference may be made to the priority matching policy setting schematic diagram shown in fig. 9. In the bin processing scenario, the system continues to receive cases, but the processing capacity of the downstream system is insufficient to handle the normal requests of the approval decision system. In order to reduce the pressure on the downstream system, all cases need to be placed in the bin at this time, and after the processing capacity of the downstream system is recovered, the bin processing is released. The case priority matching policy is set accordingly.
After all case priorities are set to the bin priority, the system no longer starts new cases and only continues to process the stock case request records of the cases that were started earlier in each priority Kafka channel.
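The bin processing trigger can be sketched as follows; the threshold value and the flag used to recognize subsequent requests are assumptions, and in practice the pending-message count would typically come from Kafka consumer-lag monitoring.

```python
def should_start_bin_processing(pending_messages: int, threshold: int) -> bool:
    """Bin processing starts only when the pending-message count exceeds the preset threshold."""
    return pending_messages > threshold

def apply_bin_processing(sorting_list, bin_active: bool):
    """While bin processing is active, mark subsequent request cases as bin priority
    and rebuild the adjusted priority sorting list from the remaining cases."""
    if not bin_active:
        return sorting_list
    for case in sorting_list:
        if getattr(case, "is_new_request", False):  # hypothetical flag for subsequent requests
            case.priority = "bin"
    # Query result: request cases other than those carrying the bin priority mark.
    return [c for c in sorting_list if c.priority != "bin"]
```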
Further, the step of performing case scheduling on the request cases according to the adjusted priority sorting list until the processing capacity of the downstream system is recovered and then releasing the bin processing operation further includes: after the bin processing operation is released, restoring the cases marked as bin priority in the adjusted priority sorting list to the original medium or low priorities marked in the priority sorting list before adjustment; and, if the message backlog in the high-priority Kafka channel has not completely subsided, marking the high-priority cases as emergency priority and switching subsequent high-priority cases to the emergency-priority Kafka channel for processing, then switching back to the high-priority Kafka channel once the backlog in the high-priority channel has subsided.
In a specific implementation, to further explain the bin release processing in this scheme, the release processing is described with reference to fig. 9. After the processing capacity of the downstream system returns to normal, the cases originally matched to the medium and low priorities are restored to their original priorities. If the message backlog in the high-priority Kafka channel has not been completely resolved, then in order to ensure that subsequent high-priority cases are processed in time, the high-priority cases are set to the emergency priority, so that subsequent high-priority cases are switched to the emergency-priority Kafka channel for processing. After the backlog in the high-priority channel is eliminated, processing is switched back to the high-priority Kafka channel, so that the emergency-priority Kafka channel remains unobstructed and healthy should the situation occur again later.

The scheme is characterized by strong flexibility, high timeliness, hierarchical processing and intelligent, perception-based adjustment. Given that approval decisions involve many case types and service classifications, the scheme can flexibly customize processing priorities based on multiple case attributes, breaks through the conventional sequential processing mode, can process high-priority cases first at any time, solves the problem that high-priority cases cannot be processed in time when the system is congested and backlogged, and improves the timeliness and agility of the system. In addition, because the system needs to process large batches of in-loan cases such as credit limit adjustment and pricing adjustment in stages, and in order not to affect the timeliness and stability of high-priority case processing in scenarios such as credit granting and credit usage, the system can automatically control the case processing rate when the system load fluctuates, ensuring that high-priority cases are processed first while maximizing the throughput of cases processed by the system. This further improves the timeliness of low-priority cases such as large-batch credit limit and pricing adjustments and ensures that the business develops stably and rapidly.
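The release step of fig. 9 can be sketched as follows; the original_priorities mapping and the backlog check are assumptions introduced for illustration.

```python
def release_bin_processing(adjusted_list, original_priorities: dict,
                           high_channel_backlog: int) -> None:
    """Restore priorities after the downstream system recovers (release step of fig. 9)."""
    for case in adjusted_list:
        if case.priority == "bin":
            # Restore the medium/low priority recorded before the adjustment.
            case.priority = original_priorities[case.case_id]

    # If the high-priority channel backlog has not yet subsided, escalate high-priority
    # cases to the emergency channel; they are switched back once the backlog is gone.
    if high_channel_backlog > 0:
        for case in adjusted_list:
            if case.priority == "high":
                case.priority = "emergency"
```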
According to this embodiment, priority matching is performed on the target request case according to the case attribute information corresponding to the target request case, and the target request case is priority-marked according to the priority matching result to obtain a priority marking result; a target flow control strategy is selected from the preset flow control strategies according to the system load information corresponding to the preset orchestration service cluster; the case starting parameters of the target request case are automatically adjusted according to the target flow control strategy to obtain target case starting parameter information; the target request cases are sorted according to the priority marking result to obtain a case priority sorting list; the target cases to be processed in the preset period and the case priority sorting list corresponding to the target cases are selected from the target request cases according to the case priority sorting list and the preset period throughput; the case starting parameter information corresponding to the target cases is obtained from the target case starting parameter information; and the preset Kafka channels are called according to the case starting parameter information and the case priority sorting list corresponding to the target cases, and case scheduling is performed on the target cases according to the preset Kafka channels and the preset orchestration service cluster. Because priority matching is performed on the case attribute information corresponding to the target request case, the case starting operation is completed according to the priority matching result and the selected target flow control strategy, and the corresponding Kafka channel is set to complete case scheduling, this embodiment, compared with existing approval decision flow control schemes that cannot perform corresponding control in combination with the service scenario and case type and therefore provide poor timeliness and stability of case processing, improves the timeliness and stability of case processing.
In addition, in order to achieve the above object, the present invention also proposes a storage medium having a case scheduler stored thereon, which when executed by a processor, implements the steps of the case scheduling method as described above.
Referring to fig. 10, fig. 10 is a block diagram showing the structure of a first embodiment of the case scheduling apparatus of the present invention.
As shown in fig. 10, a case scheduling device provided by an embodiment of the present invention includes:
the priority matching module 10 is configured to perform priority matching on the target request case according to the case attribute information corresponding to the target request case, and to perform priority marking on the target request case according to the priority matching result, so as to obtain a priority marking result;
the flow control module 20 is configured to select a target flow control strategy from the preset flow control strategies according to the system load information corresponding to the preset orchestration service cluster, and is further configured to automatically adjust the case starting parameters of the target request case according to the target flow control strategy, so as to obtain target case starting parameter information;
the case starting processing module 30 is configured to sort the target request cases according to the priority marking result, so as to obtain a case priority sorting list;
and the channel scheduling module 40 is configured to call a preset Kafka channel according to the target case starting parameter information and the case priority ranking list, and perform case scheduling on the target request case according to the preset Kafka channel and the preset scheduling service cluster.
According to this embodiment, priority matching is performed on the target request case according to the case attribute information corresponding to the target request case, and the target request case is priority-marked according to the priority matching result to obtain a priority marking result; a target flow control strategy is selected from the preset flow control strategies according to the system load information corresponding to the preset orchestration service cluster; the case starting parameters of the target request case are automatically adjusted according to the target flow control strategy to obtain target case starting parameter information; and the target request cases are sorted according to the priority marking result to obtain a case priority sorting list. In this embodiment, priority matching is performed through the case attribute information corresponding to the target request case, the case starting operation is completed according to the priority matching result and the selected target flow control strategy, and the corresponding Kafka channel is set to complete case scheduling.
Further, the preset flow control strategy comprises a full-open flow control grade strategy, a conventional flow control grade strategy and a secondary fusing flow control grade strategy; the flow control module 20 is further configured to collect, in real time, system load information corresponding to a preset orchestration service cluster, where the system load information includes load index information corresponding to a case processing throughput rate, a CPU utilization rate, a memory utilization rate, and a case average processing duration; determining the load grade corresponding to each service node in the preset arrangement service cluster according to the load index information; and selecting a target flow control strategy from the full-open flow control level strategy, the conventional flow control level strategy and the secondary fusing flow control level strategy according to the load levels corresponding to the service nodes.
Further, the flow control module 20 is further configured to adjust a case starting rate in the case starting parameters of the target request case to be a maximum value if the target flow control policy is a full-open flow control level policy, so as to obtain a first case starting parameter; if the target flow control strategy is a conventional flow control grade strategy, dynamically adjusting the case starting rate in the case starting parameters of the target request case according to the actual load condition of the system to obtain a second case starting parameter; if the target flow control strategy is a secondary fusing flow control level strategy, the case starting rate in the case starting parameters of the target request case is adjusted to be the minimum value, and a third case starting parameter is obtained; and determining target case starting parameter information according to the first case starting parameter, the second case starting parameter and the third case starting parameter.
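A rough Python sketch of the three-level flow control described for module 20 follows; the load cut-off values and the rate formula for the conventional level are illustrative assumptions, since the text only states that the rate is adjusted dynamically with the actual load.

```python
def classify_node_load(cpu: float, mem: float, avg_latency_ms: float) -> str:
    """Map a node's load indicators to a coarse load level (illustrative thresholds)."""
    if cpu < 0.5 and mem < 0.5 and avg_latency_ms < 200:
        return "light"
    if cpu < 0.85 and mem < 0.85:
        return "normal"
    return "overloaded"

def select_flow_control_policy(node_levels: list) -> str:
    """Choose among the full-open, conventional and secondary-fusing level strategies."""
    if any(level == "overloaded" for level in node_levels):
        return "secondary_fusing"
    if all(level == "light" for level in node_levels):
        return "full_open"
    return "conventional"

def adjust_case_start_rate(policy: str, min_rate: int, max_rate: int, load_factor: float) -> int:
    """Derive the case starting rate for the selected policy."""
    if policy == "full_open":
        return max_rate                     # first case starting parameter
    if policy == "secondary_fusing":
        return min_rate                     # third case starting parameter
    # Conventional level: scale between min and max with the measured load (0..1).
    return max(min_rate, int(max_rate * (1.0 - load_factor)))
```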
Further, the case starting processing module 30 is further configured to select, from the target request cases, a target case to be processed in a preset period and a case priority ranking list corresponding to the target case according to the case priority ranking list and a preset period throughput; acquiring case starting parameter information corresponding to the target case from the target case starting parameter information;
the channel scheduling module 40 is further configured to call a preset Kafka channel according to the case starting parameter information corresponding to the target case and the case priority ranking list corresponding to the target case, and perform case scheduling on the target case according to the preset Kafka channel and the preset scheduling service cluster.
Further, the preset Kafka channel comprises a Kafka channel corresponding to an emergency priority, a high priority, a medium priority and a low priority; the channel scheduling module 40 is further configured to select a target Kafka channel from the Kafka channels corresponding to the emergency priority, the high priority, the medium priority and the low priority according to a priority marking result in the case priority ranking list corresponding to the target case; and carrying out case scheduling on the target case according to the target Kafka channel and the preset scheduling service node.
Further, the channel scheduling module 40 is further configured to obtain a number of messages to be processed in a preset Kafka message channel; judging whether to start the bin processing operation according to the quantity of the to-be-processed messages of the Kafka message channel to obtain a judging result; according to the judging result, carrying out bin storage processing on the target request cases, and carrying out bin storage priority marking on the cases needing bin storage to obtain a bin storage priority marking result; inquiring the request cases except the cases with the bin priority identifiers from the case priority sorting list according to the bin priority marking result to obtain an inquiry result; the request cases are prioritized according to the query result, and an adjusted prioritized list is obtained; and carrying out case scheduling on the request cases according to the adjusted priority ordering list until the processing capacity of the downstream system is recovered and then releasing the bin processing operation.
Further, the channel scheduling module 40 is further configured to restore the case marked as the bin priority in the adjusted prioritized list to the original priority marked as the medium or low priority in the prioritized list before adjustment after the bin processing operation is released; if the message backlog situation of the high-priority Kafka channel does not completely subside, marking the high-priority case as an emergency priority, and switching the high-priority case of the subsequent piece to the emergency-priority Kafka channel for processing until the message backlog situation of the high-priority channel subsides, switching the high-priority case back to the high-priority Kafka channel for processing.
Other embodiments or specific implementation manners of the case scheduling device of the present invention may refer to the above method embodiments, and are not described herein again.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The foregoing embodiment numbers of the present invention are merely for description and do not represent the relative merits of the embodiments. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The terms first, second, third, etc. do not denote any order; these terms are merely used as names.
From the above description of the embodiments, it will be clear to a person skilled in the art that the method of the above embodiments may be implemented by means of software plus a necessary general hardware platform, or of course by means of hardware, although in many cases the former is the preferred implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g. read-only memory (ROM)/random access memory (RAM), magnetic disk, optical disk), comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, etc.) to perform the method according to the embodiments of the present invention.
The foregoing description is only of the preferred embodiments of the present invention and is not intended to limit the scope of the invention; any equivalent structural or process transformation made using the contents of this description, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (9)

1. The case scheduling method is characterized by comprising the following steps of:
priority matching is carried out on the target request case according to case attribute information corresponding to the target request case, priority marking is carried out on the target request case according to a priority matching result, and a priority marking result is obtained;
selecting a target flow control strategy from preset flow control strategies according to system load information corresponding to a preset arrangement service cluster;
automatically adjusting the case starting parameters of the target request case according to the target flow control strategy to obtain target case starting parameter information;
sorting the target request cases according to the priority marking result to obtain a case priority sorting list;
calling a preset Kafka channel according to the target case starting parameter information and the case priority ranking list, and performing case scheduling on the target request case according to the preset Kafka channel and the preset scheduling service cluster;
Acquiring the number of messages to be processed of a preset Kafka message channel;
judging whether to start the bin processing operation according to the quantity of the to-be-processed messages of the Kafka message channel to obtain a judging result;
according to the judging result, carrying out bin storage processing on the target request cases, and carrying out bin storage priority marking on the cases needing bin storage to obtain a bin storage priority marking result;
inquiring the request cases except the cases with the bin priority identifiers from the case priority sorting list according to the bin priority marking result to obtain an inquiry result;
the request cases are prioritized according to the query result, and an adjusted prioritized list is obtained;
and carrying out case scheduling on the request cases according to the adjusted priority ordering list until the processing capacity of the downstream system is recovered and then releasing the bin processing operation.
2. The case scheduling method of claim 1, wherein the preset flow control policy includes a full-open flow control level policy, a regular flow control level policy, and a secondary fuse flow control level policy; the step of selecting the target flow control strategy from the preset flow control strategies according to the system load information corresponding to the preset arrangement service cluster comprises the following steps:
Collecting system load information corresponding to a preset arrangement service cluster in real time, wherein the system load information comprises load index information corresponding to case processing throughput rate, CPU (central processing unit) utilization rate, memory utilization rate and case average processing time length;
determining the load grade corresponding to each service node in the preset arrangement service cluster according to the load index information;
and selecting a target flow control strategy from the full-open flow control level strategy, the conventional flow control level strategy and the secondary fusing flow control level strategy according to the load levels corresponding to the service nodes.
3. The case scheduling method of claim 2, wherein the step of automatically adjusting the case starting parameter of the target request case according to the target flow control policy to obtain target case starting parameter information comprises the following steps:
if the target flow control strategy is a full-open flow control grade strategy, adjusting the case starting rate in the case starting parameters of the target request case to be the maximum value, and obtaining a first case starting parameter;
if the target flow control strategy is a conventional flow control grade strategy, dynamically adjusting the case starting rate in the case starting parameters of the target request case according to the actual load condition of the system to obtain a second case starting parameter;
If the target flow control strategy is a secondary fusing flow control level strategy, the case starting rate in the case starting parameters of the target request case is adjusted to be the minimum value, and a third case starting parameter is obtained;
and determining target case starting parameter information according to the first case starting parameter, the second case starting parameter and the third case starting parameter.
4. The case scheduling method of claim 1, wherein the step of calling a preset Kafka channel according to the target case start parameter information and the case priority ranking list and performing case scheduling on the target request case according to the preset Kafka channel and the preset orchestration service cluster comprises:
selecting a target case to be processed in a preset period and a case priority ordering list corresponding to the target case from the target request cases according to the case priority ordering list and the preset period processing capacity;
acquiring case starting parameter information corresponding to the target case from the target case starting parameter information;
calling a preset Kafka channel according to the case starting parameter information corresponding to the target case and the case priority ranking list corresponding to the target case, and scheduling the target case according to the preset Kafka channel and the preset scheduling service cluster.
5. The case scheduling method of claim 4, wherein the preset Kafka channel comprises a Kafka channel corresponding to an emergency priority, a high priority, a medium priority, and a low priority; the step of calling a preset Kafka channel according to the case starting parameter information corresponding to the target case and the case priority ranking list corresponding to the target case, and performing case scheduling on the target case according to the preset Kafka channel and the preset scheduling service cluster comprises the following steps:
selecting a target Kafka channel from the Kafka channels corresponding to the emergency priority, the high priority, the medium priority and the low priority according to a priority marking result in the case priority sorting list corresponding to the target case;
and carrying out case scheduling on the target case according to the target Kafka channel and the preset scheduling service cluster.
6. The case scheduling method as set forth in claim 1, wherein the step of performing case scheduling on the requested case according to the adjusted prioritized list until the processing capability of the downstream system is restored and then releasing the bin processing operation further includes:
after the bin processing operation is released, recovering the cases marked as bin priorities in the adjusted priority ranking list to the original priorities marked as medium and low priorities in the priority ranking list before adjustment;
If the message backlog situation of the high-priority Kafka channel does not completely subside, marking the high-priority case as an emergency priority, and switching the high-priority case of the subsequent piece to the emergency-priority Kafka channel for processing until the message backlog situation of the high-priority channel subsides, switching the high-priority case back to the high-priority Kafka channel for processing.
7. A case scheduling apparatus, characterized in that the case scheduling apparatus comprises: memory, a processor and a case scheduler stored on the memory and executable on the processor, which case scheduler when executed by the processor implements the case scheduling method of any one of claims 1 to 6.
8. A storage medium having stored thereon a case scheduler, which when executed by a processor implements the case scheduling method of any one of claims 1 to 6.
9. A case scheduling device, characterized in that the case scheduling device comprises:
the priority matching module is further used for performing priority matching on the target request case according to case attribute information corresponding to the target request case, and performing priority marking on the target request case according to a priority matching result to obtain a priority marking result;
The flow control module is used for selecting a target flow control strategy from preset flow control strategies according to system load information corresponding to the preset arrangement service cluster;
the flow control module is further used for automatically adjusting the case starting parameters of the target request case according to the target flow control strategy to obtain target case starting parameter information;
the case starting processing module is used for sequencing the target request cases according to the priority marking result to obtain a case priority sequencing list;
the channel scheduling module is used for calling a preset Kafka channel according to the target case starting parameter information and the case priority ordering list, and performing case scheduling on the target request case according to the preset Kafka channel and the preset scheduling service cluster;
the channel scheduling module is also used for acquiring the number of the messages to be processed of the preset Kafka message channel; judging whether to start the bin processing operation according to the quantity of the to-be-processed messages of the Kafka message channel to obtain a judging result; according to the judging result, carrying out bin storage processing on the target request cases, and carrying out bin storage priority marking on the cases needing bin storage to obtain a bin storage priority marking result; inquiring the request cases except the cases with the bin priority identifiers from the case priority sorting list according to the bin priority marking result to obtain an inquiry result; the request cases are prioritized according to the query result, and an adjusted prioritized list is obtained; and carrying out case scheduling on the request cases according to the adjusted priority ordering list until the processing capacity of the downstream system is recovered and then releasing the bin processing operation.
CN202311007477.8A 2023-08-11 2023-08-11 Case scheduling method, equipment, storage medium and device Active CN116719630B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311007477.8A CN116719630B (en) 2023-08-11 2023-08-11 Case scheduling method, equipment, storage medium and device

Publications (2)

Publication Number Publication Date
CN116719630A CN116719630A (en) 2023-09-08
CN116719630B true CN116719630B (en) 2024-03-15

Family

ID=87875627

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311007477.8A Active CN116719630B (en) 2023-08-11 2023-08-11 Case scheduling method, equipment, storage medium and device

Country Status (1)

Country Link
CN (1) CN116719630B (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9319433B2 (en) * 2010-06-29 2016-04-19 At&T Intellectual Property I, L.P. Prioritization of protocol messages at a server

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110099012A (en) * 2019-05-08 2019-08-06 深信服科技股份有限公司 A kind of flow control methods, system and electronic equipment and storage medium
WO2023143276A1 (en) * 2022-01-28 2023-08-03 阿里巴巴(中国)有限公司 Traffic control method, and device and storage medium
CN115550284A (en) * 2022-09-29 2022-12-30 中国农业银行股份有限公司 Message processing method, device and equipment
CN116095006A (en) * 2022-11-01 2023-05-09 深圳市佳创视讯技术股份有限公司 Dynamic flow control method and system for video live broadcast service

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Load Self-adaptive Feedback Scheduling Strategy for Heterogeneous Hadoop Clusters; Pan Jiayi et al.; Computer Engineering & Science; Vol. 39, No. 3; pp. 413-423 *

Also Published As

Publication number Publication date
CN116719630A (en) 2023-09-08

Similar Documents

Publication Publication Date Title
CN110727512B (en) Cluster resource scheduling method, device, equipment and storage medium
JP3818655B2 (en) Task scheduling method, system, and program product
CN111614570B (en) Flow control system and method for service grid
CN111399989B (en) Container cloud-oriented task preemption and scheduling method and system
US20130268678A1 (en) Method and Apparatus for Facilitating Fulfillment of Requests on a Communication Network
CN104168318A (en) Resource service system and resource distribution method thereof
CN108770017B (en) Dynamic equalization method and system for wireless resources
CN112799817A (en) Micro-service resource scheduling system and method
WO2019127891A1 (en) Incoming call allocation method, electronic device, and computer readable storage medium
CN111526081B (en) Mail forwarding method, device, equipment and storage medium
CN116719630B (en) Case scheduling method, equipment, storage medium and device
CN112363812A (en) Database connection queue management method based on task classification and storage medium
CN111262783B (en) Dynamic routing method and device
CN112817726A (en) Virtual machine grouping resource scheduling method based on priority under cloud environment
CN117608840A (en) Task processing method and system for comprehensive management of resources of intelligent monitoring system
CN115665157B (en) Balanced scheduling method and system based on application resource types
CN113886030A (en) Resource scheduling method, electronic device and storage medium
CN107360483B (en) Controller load balancing algorithm for software defined optical network
CN113596146B (en) Resource scheduling method and device based on big data
CN109670932A (en) Credit data calculate method, apparatus, system and computer storage medium
CN113391927A (en) Method, device and system for processing business event and storage medium
CN113419863A (en) Data distribution processing method and device based on node capability
CN114064226A (en) Resource coordination method and device for container cluster and storage medium
CN111158899A (en) Data acquisition method, data acquisition device, task management center and task management system
CN115495251B (en) Intelligent control method and system for computing resources in data integration operation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant