CN111506410A - Background batch processing service optimization method, system and storage medium - Google Patents


Publication number
CN111506410A
Authority
CN
China
Prior art keywords: service, batch, processing, background, historical
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010316976.5A
Other languages
Chinese (zh)
Other versions
CN111506410B (en)
Inventor
刘通
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Si Tech Information Technology Co Ltd
Original Assignee
Beijing Si Tech Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Si Tech Information Technology Co Ltd filed Critical Beijing Si Tech Information Technology Co Ltd
Priority to CN202010316976.5A
Publication of CN111506410A
Application granted
Publication of CN111506410B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/48: Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806: Task transfer initiation or dispatching
    • G06F 9/4843: Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881: Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20: Information retrieval of structured data, e.g. relational data
    • G06F 16/24: Querying
    • G06F 16/245: Query processing
    • G06F 16/2458: Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
    • G06F 16/2474: Sequence data queries, e.g. querying versioned data
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/54: Interprogram communication
    • G06F 9/544: Buffers; Shared memory; Pipes
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention relates to a method, a system and a storage medium for optimizing background batch processing services. The method judges whether a background service database contains batch services to be processed; if so, it acquires the historical processing information label of each historical processing service in the background service database and obtains the single captured data quantity of the batch services to be processed from all the historical processing information labels. Target batch processing service data is then captured according to the single captured data quantity and batch-processed, and the corresponding target processing information tag is stored in the background service database. The remainder of the batch services to be processed, excluding the target batch processing service data, is taken as the updated batch services to be processed, and an updated single captured data quantity is obtained, until the updated batch services to be processed have completed batch processing. The invention automatically optimizes and adjusts the quantity of data captured at one time in real time, reduces manual intervention, makes maximal use of hardware resources and greatly improves the performance of background batch processing services.

Description

Background batch processing service optimization method, system and storage medium
Technical Field
The invention relates to the technical field of background service processing, in particular to an optimization method, a system and a storage medium for background batch processing services.
Background
In the communication industry, and the telecommunication industry in particular, thirty years of development have pushed subscriber numbers in a single province past the tens of millions, and past one hundred million nationwide. Such a large subscriber base produces correspondingly large business volumes. Background batch processing is one of the more common technical means of handling service acceptance or service changes in the telecommunication industry: the marketing department regularly releases promotional offers, and user subscription relationships are unsubscribed or renewed in bulk at the beginning and end of each month.
Background batch business processing, especially at the million-record scale and above, is generally performed by starting a corresponding background batch business process (multithreaded) and processing the business in a loop until all pending tasks are completed. The logic of a conventional background batch processing service is shown in fig. 1. In the process described in fig. 1, when the background process reads database data (DB data), the number of DB records captured at a time (the M value in fig. 1) is generally a default setting written into the program or a configuration file.
Therefore, the conventional background service batch processing method has the following problems:
1. The number of DB records captured at a time (the M value) is generally a default written into the program or a configuration file, and its setting relies on simple tests or historical experience. Because M is fixed, and its size determines the resources a background process occupies, an unreasonable M value, whether too large or too small, prevents hardware resources (CPU, memory, network and so on) from being used effectively;
2. Modifying the M value requires restarting the program or updating the configuration file, and therefore relies on manual intervention;
3. Hardware resources themselves change dynamically, yet the M value cannot be adjusted in real time to match the latest resource conditions; any adjustment again depends on manual intervention.
Therefore, an optimization method for background batch processing services is needed that solves the prior-art problems of a fixed batch quantity, excessive reliance on manual intervention and under-utilization of hardware resources, and that can automatically optimize and adjust the batch quantity in real time, reduce manual intervention, make maximal use of hardware resources and greatly improve the performance of background batch processing services.
Disclosure of Invention
The technical problem to be solved by the present invention is to provide an optimization method, system and storage medium for background batch processing services that address the prior-art problems of a fixed batch quantity, excessive reliance on manual intervention and under-utilization of hardware resources, and that greatly improve the performance of background batch processing services.
The technical scheme for solving the technical problems is as follows:
a method for optimizing background batch processing service comprises the following steps:
Step 1: reading a background service database;
Step 2: judging whether the background service database contains batch services to be processed; if so, executing step 3, otherwise returning to step 1;
Step 3: acquiring the historical processing information label corresponding to each historical processing service in the background service database, and obtaining the single captured data quantity of the batch services to be processed according to all the historical processing information labels;
Step 4: capturing target batch processing service data from the batch services to be processed according to the single captured data quantity, batch-processing the target batch processing service data, and storing the corresponding target processing information tag in the background service database as a historical processing information label;
Step 5: taking the remainder of the batch services to be processed, excluding the target batch processing service data, as the updated batch services to be processed, and repeating steps 3 to 4 until the updated batch services to be processed have completed batch processing.
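The five steps above amount to a polling loop that resizes each capture from the stored history. The sketch below is illustrative Python under stated assumptions: the `db` object, the history-tag format (a list of dicts with an `avg_time` field) and the two-tag threshold are hypothetical stand-ins, not details taken from the patent.

```python
def compute_batch_size(history, default_m=20, k=5):
    """Derive the single captured data quantity N from historical
    processing tags; fall back to the default M when no history exists.
    The two-tag threshold and the avg_time field are illustrative."""
    if not history:
        return default_m                  # first-run case: use default M
    if len(history) < 2:
        return default_m + k              # first calculation method: N = M + K
    prev, last = history[-2], history[-1]
    if prev["avg_time"] < last["avg_time"]:
        return max(1, default_m - k)      # processing slowed down: N = M - K
    return default_m + k                  # otherwise: N = M + K

def run_one_cycle(db):
    """Steps 1-5: read pending work, size each capture from history,
    batch-process it, and record a new history tag after each batch."""
    pending = db.read_pending()           # step 1: read the database
    if not pending:                       # step 2: nothing to process
        return 0
    processed = 0
    while pending:                        # steps 3-5 repeat until done
        n = compute_batch_size(db.read_history())      # step 3
        target, pending = pending[:n], pending[n:]     # step 4: capture
        tag = db.process(target)                       # batch-process
        db.save_history(tag)                           # store the new tag
        processed += len(target)
    return processed
```

Because each pass stores a fresh tag before the next `compute_batch_size` call, the capture size can drift up or down while a large backlog drains, which is the real-time adjustment the claims describe.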
The invention has the beneficial effects that: first, the method judges whether there is batch service data to be processed; if so, the single captured data quantity for the pending batch service data is calculated, and if not, no background batch processing is needed. The single captured data quantity is then calculated from the historical processing information labels of the historical processing services, and the target batch processing service data is captured accordingly, so the capture better matches the actual hardware resource conditions at that moment and hardware resources can be used maximally. When the target batch processing service data is batch-processed, its target processing information tag is stored in the background service database as a historical processing information label, so it can serve as input for calculating the single captured data quantity of the next batch. This makes it convenient to recalculate the single captured data quantity (the updated single captured data quantity) for the remainder of the pending batch services (the updated batch services to be processed), achieving real-time calculation and adjustment of the single captured data quantity without excessive reliance on manual intervention.
The optimization method of the background batch processing service solves the prior-art problems of a fixed batch quantity, excessive reliance on manual intervention and under-utilization of hardware resources; it automatically optimizes and adjusts the single captured data quantity in real time, reduces manual intervention, uses hardware resources maximally, greatly improves the performance of background batch processing services, and, when deployed uniformly as a cluster, can reduce the cost of modifying background programs to a certain extent.
On the basis of the technical scheme, the invention also has the following improvements:
further: before the step 3, the following steps are also included:
judging whether historical processing services exist in the background service database; if so, executing steps 3 to 5 in sequence; otherwise, taking the default captured data quantity as the single captured data quantity of the batch services to be processed and executing steps 4 to 5 in sequence.
The beneficial effect of this further scheme is as follows: during background batch processing, the pending batch services in the background service database may be being processed for the first time, in which case no corresponding historical processing services exist. The single captured data quantity then need not be calculated from historical processing information, and the default captured data quantity (the preset default value of the single captured data quantity) is used directly. The method thus supports both the automatic adjustment mode and the traditional manual setting mode, giving it high generality.
Further: the historical processing information label comprises a task identifier corresponding to each historical processing service one by one.
The beneficial effect of this further scheme is as follows: with the task identifiers, the number of historical processing information labels can conveniently be counted, so that the single captured data quantity can be calculated by the method appropriate to that count. The value of the single captured data quantity can thus be adjusted automatically and in real time, giving the system the ability to collect data and tune performance on the fly, improving background batch processing performance and maximizing hardware resource utilization.
Further: the step 3 specifically includes:
Step 31: acquiring the task identifier corresponding to each historical processing service in the background service database, counting all the task identifiers, and taking that count as the number of historical processing information labels;
Step 32: judging whether the number of historical processing information labels is smaller than a preset tag count; if so, calculating the single captured data quantity with a preset first captured-data-quantity calculation method; if not, calculating it with a preset second captured-data-quantity calculation method.
The beneficial effect of this further scheme is as follows: by comparing the label count derived from the task identifiers with the preset tag count and selecting the calculation method accordingly, the single captured data quantity is adjusted automatically within a certain range, so the performance of the background batch processing service is also tuned automatically within that range, without manual intervention, while ensuring real-time maximal utilization of hardware resources. The preset tag count can be chosen and adjusted according to actual conditions.
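Steps 31 and 32 reduce to a small dispatch on the label count; a minimal sketch follows (function and parameter names are hypothetical, not from the patent):

```python
def select_calculation_method(task_ids, preset_tag_count):
    """Step 31: the count of task identifiers is the number of
    historical processing information labels; step 32: compare it
    with the preset tag count to pick the calculation method."""
    tag_count = len(task_ids)
    return "first" if tag_count < preset_tag_count else "second"
```

With few historical runs the "first" method applies; once enough history accumulates, the "second", history-comparing method takes over.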
Further: the first captured data quantity calculation method specifically comprises the following steps:
calculating the single captured data quantity according to a first calculation formula, namely
N = M + K,
where N is the single captured data quantity, M is the default captured data quantity, and K is the captured-data-quantity adjustment value.
Further: the historical processing information label also comprises task starting time and task ending time corresponding to each historical processing service;
the second captured-data-quantity calculation method specifically comprises:
ordering all historical processing services by time and taking the second-to-last and the last of them; obtaining a first average task processing time from the task start and end times of the second-to-last historical processing service, and a second average task processing time from the task start and end times of the last historical processing service;
judging whether the first average task processing time is less than the second average task processing time; if so, calculating the single captured data quantity according to a second calculation formula, and if not, according to the first calculation formula;
the second calculation formula being
N = M - K.
The beneficial effect of this further scheme is as follows: the two formulas adjust the single captured data quantity within a bounded range, i.e. the N value is fine-tuned around the M value (the preset default of the single captured data quantity, the default captured data quantity), which better matches actual hardware resource conditions, simplifies the calculation, reduces complexity and improves the efficiency of the background batch processing service. The first and second average task processing times reflect, to a certain extent, the current hardware resource and performance conditions of the background batch processing service, so comparing them in the second calculation method gives an adjustment basis that fits the actual situation and yields a better optimization effect. The captured-data-quantity adjustment value can be set and adjusted according to actual conditions.
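The second calculation method can be sketched as follows. This sketch assumes a particular reading of the translated claim: the first average comes from the second-to-last run and the second average from the last run, so a slowdown in the latest run lowers N. The helper names and the representation of times as seconds are hypothetical.

```python
def average_task_time(start_times, end_times):
    """Average per-task duration of one historical run, computed from
    its recorded task start and end times (in seconds)."""
    return sum(e - s for s, e in zip(start_times, end_times)) / len(start_times)

def adjusted_capture_count(m, k, penultimate_run, latest_run):
    """Compare the average task times of the two most recent runs:
    if the latest run is slower, apply N = M - K; otherwise N = M + K.
    Each run is a (start_times, end_times) pair."""
    first_avg = average_task_time(*penultimate_run)   # second-to-last run
    second_avg = average_task_time(*latest_run)       # most recent run
    return m - k if first_avg < second_avg else m + k
```

Either branch keeps N within K of the default M, matching the bounded fine-tuning the text describes.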
Further: in the step 4, the specific implementation of performing batch processing on the target batch processing service data is as follows:
storing the target batch processing service data in a cache of the service process to obtain target cached service data, and processing the target cached service data.
The beneficial effect of this further scheme is as follows: the captured target batch processing service data is processed from the service's own cache, unaffected by other service data, which reduces the error rate of background batch processing and improves processing efficiency to a certain degree.
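The cache-then-process step can be illustrated with a plain in-process list standing in for the service cache (the handler and all names here are hypothetical):

```python
def process_with_cache(target_rows, handler):
    """Copy the captured target batch data into the service process's
    own cache so processing is isolated from other service data,
    then handle each cached row."""
    cache = list(target_rows)            # the 'target cached service data'
    return [handler(row) for row in cache]
```

Working from a private copy means concurrent changes to the shared pending set cannot disturb a batch that is already in flight.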
According to another aspect of the present invention, an optimization system for background batch processing services is also provided, comprising a reading module, a judging module, a calculation module, a capturing module and a processing module;
the reading module is used for reading a background service database;
the judging module is used for judging whether the background service database contains batch services to be processed;
the calculation module is used for obtaining the single captured data quantity of the batch services to be processed from all the historical processing information labels when the judging module determines that the background service database contains batch services to be processed;
the capturing module is used for capturing target batch processing service data from the batch services to be processed according to the single captured data quantity;
the processing module is used for batch-processing the target batch processing service data and storing its target processing information tag in the background service database as a historical processing information label;
the calculation module is further configured to take the remainder of the batch services to be processed, excluding the target batch processing service data, as the updated batch services to be processed, acquire the historical processing information labels of the historical processing services in the background service database, and obtain the updated single captured data quantity for the updated batch services from all the historical processing information labels.
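A minimal sketch of how the five modules might be wired together, assuming a `db` object with the obvious methods and a pluggable `size_fn` for the calculation; all class, method and parameter names are illustrative, not from the patent:

```python
class BatchServiceOptimizationSystem:
    """Reading, judging, calculation, capturing and processing modules
    collapsed into one class for illustration."""

    def __init__(self, db, size_fn):
        self.db = db                 # background service database (assumed API)
        self.size_fn = size_fn       # computes N from historical labels

    def read(self):                  # reading module
        return self.db.read_pending()

    def judge(self, pending):        # judging module
        return bool(pending)

    def calculate(self):             # calculation module
        return self.size_fn(self.db.read_history())

    def capture(self, pending, n):   # capturing module: split off one batch
        return pending[:n], pending[n:]

    def process(self, target):       # processing module: run batch, save label
        tag = self.db.process(target)
        self.db.save_history(tag)
        return tag
```

Keeping the size calculation behind `size_fn` mirrors the claims' split between the first and second calculation methods: either can be plugged in without touching the other modules.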
The invention has the beneficial effects that: this optimization system solves the prior-art problems of a fixed batch quantity, excessive reliance on manual intervention and under-utilization of hardware resources; it automatically optimizes and adjusts the single captured data quantity in real time, reduces manual intervention, uses hardware resources maximally, greatly improves the performance of background batch processing services, and, when deployed uniformly as a cluster, can reduce the cost of modifying background programs to a certain extent.
On the basis of the technical scheme, the invention also has the following improvements:
further: the judging module is further configured to:
judging whether historical processing services exist in the background service database.
Further: the historical processing information label comprises a task identifier corresponding to each historical processing service one by one.
Further: the calculation module is specifically configured to:
acquire the task identifier corresponding to each historical processing service in the background service database, count all the task identifiers, and take that count as the number of historical processing information labels;
judge whether the number of historical processing information labels is smaller than the preset tag count; if so, calculate the single captured data quantity with the preset first captured-data-quantity calculation method, and if not, with the preset second captured-data-quantity calculation method.
Further: the calculation module is further specifically configured to:
calculate the single captured data quantity according to the first calculation formula, namely
N = M + K,
where N is the single captured data quantity, M is the default captured data quantity, and K is the captured-data-quantity adjustment value.
Further: the historical processing information label also comprises task starting time and task ending time corresponding to each historical processing service;
the calculation module is further specifically configured to:
order all historical processing services by time and take the second-to-last and the last of them; obtain a first average task processing time from the task start and end times of the second-to-last historical processing service, and a second average task processing time from the task start and end times of the last historical processing service;
judge whether the first average task processing time is less than the second average task processing time; if so, calculate the single captured data quantity according to the second calculation formula, and if not, according to the first calculation formula;
the second calculation formula being
N = M - K.
further: the processing module is specifically configured to:
store the target batch processing service data in a cache of the service process to obtain target cached service data, and process the target cached service data.
According to another aspect of the present invention, there is provided a system for optimizing background batch processing services, comprising a processor, a memory and a computer program stored in the memory and executable on the processor, the computer program implementing the steps of the optimization method of the present invention when executed.
The invention has the beneficial effects that: optimization of the background batch processing service is realized by a computer program stored in memory and run on the processor; the single captured data quantity can be automatically optimized and adjusted in real time, manual intervention is reduced, hardware resources are used maximally, the performance of the background batch processing service is greatly improved, and cluster-style unified deployment can reduce the cost of modifying background programs to a certain extent.
In accordance with yet another aspect of the present invention, there is provided a computer storage medium comprising at least one instruction which, when executed, implements the steps of the optimization method of the present invention.
The invention has the beneficial effects that: executing the computer storage medium containing at least one instruction realizes the optimization of the background batch processing service; the single captured data quantity can be automatically optimized and adjusted in real time, manual intervention is reduced, hardware resources are used maximally, the performance of the background batch processing service is greatly improved, and cluster-style unified deployment can reduce the cost of modifying background programs to a certain extent.
Drawings
FIG. 1 is a schematic flow chart of a conventional background batch processing business method;
fig. 2 is a schematic flowchart of an optimization method for background batch processing services according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart illustrating a process of obtaining a single capture data amount according to a first embodiment of the present invention;
fig. 4 is a schematic view of a complete flow of an optimization method for background batch processing service according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an optimization system for background batch processing services according to a second embodiment of the present invention.
Detailed Description
The principles and features of this invention are described below in conjunction with the following drawings, which are set forth by way of illustration only and are not intended to limit the scope of the invention.
The present invention will be described with reference to the accompanying drawings.
In an embodiment, as shown in fig. 2, a method for optimizing a background batch processing service includes the following steps:
S1: reading a background service database;
S2: judging whether the background service database contains batch services to be processed; if so, executing S3, otherwise returning to S1;
S3: acquiring the historical processing information label corresponding to each historical processing service in the background service database, and obtaining the single captured data quantity of the batch services to be processed according to all the historical processing information labels;
S4: capturing target batch processing service data from the batch services to be processed according to the single captured data quantity, batch-processing the target batch processing service data, and storing the corresponding target processing information tag in the background service database as a historical processing information label;
S5: taking the remainder of the batch services to be processed, excluding the target batch processing service data, as the updated batch services to be processed, and repeating S3 to S4 until the updated batch services to be processed have completed batch processing.
First, the method judges whether there is batch service data to be processed; if so, the single captured data quantity for the pending batch service data is calculated, and if not, no background batch processing is needed. The single captured data quantity is then calculated from the historical processing information labels of the historical processing services, and the target batch processing service data is captured accordingly, so the capture better matches the actual hardware resource conditions at that moment and hardware resources can be used maximally. When the target batch processing service data is batch-processed, its target processing information tag is stored in the background service database as a historical processing information label, serving as input for the next calculation of the single captured data quantity; the remainder of the pending batch services can thus be resized in real time without excessive reliance on manual intervention.
This optimization method solves the prior-art problems of a fixed batch quantity, excessive reliance on manual intervention and under-utilization of hardware resources; it automatically optimizes and adjusts the single captured data quantity in real time, reduces manual intervention, uses hardware resources maximally, greatly improves the performance of background batch processing services, and, when deployed uniformly as a cluster, can reduce the cost of modifying background programs to a certain extent.
Specifically, the background service database in this embodiment is a DB database in the telecommunications industry.
Preferably, before S3, the method further comprises the following steps:
and judging whether historical processing services exist in the background service database, if so, sequentially executing S3 to S5, otherwise, taking the default capture data quantity as the single capture data quantity of the batch services to be processed, and sequentially executing S4 to S5.
When services are batch-processed in the background, the batch service to be processed in the background service database may be the first batch processed, that is, there is no historical processing service corresponding to the batch service to be processed. In this case the single capture data quantity of the batch service to be processed does not need to be calculated from the relevant information of historical processing services, and the default captured data quantity (i.e., the preset default value of the single capture data quantity) is used directly as the single capture data quantity. The method thus supports both the automatic adjustment mode and the traditional manual setting mode, and has strong universality.
Specifically, in this embodiment, the number of historical processing services in the DB database may be 0, 1, or greater than 1, and the default captured data quantity M is preset to 20 and may be adjusted manually.
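The fallback just described might look like the following sketch; the function name and the callback for the history-based calculation are assumptions, while M = 20 follows this embodiment.

```python
# Sketch of the fallback above: with no historical processing services in the
# database, the preset default M (20 in this embodiment, manually adjustable)
# is used directly as the single capture quantity N. Names are assumptions.

DEFAULT_M = 20

def capture_quantity_or_default(history_tags, compute_from_history,
                                default_m=DEFAULT_M):
    if not history_tags:            # first-ever batch: nothing to learn from
        return default_m            # default capture quantity becomes N
    return compute_from_history(history_tags)  # otherwise use S3's calculation

print(capture_quantity_or_default([], lambda tags: 99))                # 20
print(capture_quantity_or_default([{"task_id": 1}], lambda tags: 25))  # 25
```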
Preferably, the historical processing information tag includes a task identifier corresponding to each historical processing service.
Through the task identification, the number of the historical processing information tags can be conveniently counted subsequently according to the task identification, so that the single-time captured data number can be conveniently calculated by adopting a corresponding method according to different historical processing information tag numbers, the value of the single-time captured data number can be automatically adjusted in real time, the capacity of collecting and calculating the adjustment performance in real time is realized, the background batch processing performance is improved, and the hardware resource utilization capacity can be maximally realized.
Preferably, as shown in fig. 3, S3 specifically includes:
s31: acquiring task identifiers corresponding to each historical processing service in the background service database one by one, counting the number of all the task identifiers, and taking the number of all the task identifiers as the number of historical processing information labels;
s32: judging whether the number of the historical processing information tags is smaller than a preset number of tags or not, if so, calculating to obtain the number of the single-time captured data by adopting a preset first captured data number calculation method; if not, calculating to obtain the single-time captured data quantity by adopting a preset second captured data quantity calculation method.
By comparing the number of the historical processing information tags counted according to the task identifiers with the number of the preset tags and calculating the number of the single-time captured data by adopting different calculation methods, the automatic adjustment of the number of the single-time captured data within a certain range is realized, so that the automatic adjustment of the performance of the background batch processing service within a certain range is realized, manual intervention is not needed, and the real-time maximized utilization of hardware resources can be ensured; wherein the number of the preset labels can be selected and adjusted according to the actual situation.
Specifically, the number of historical processing information tags in this embodiment is L and the preset tag count is 2. When L < 2 (i.e., L is 1; L = 0 means there is no historical processing service in the DB database, in which case the N value is output according to the default captured data quantity M), the N value is calculated by the first captured data quantity calculation method; when L ≥ 2, the N value is calculated by the second captured data quantity calculation method.
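The S31/S32 selection can be sketched as below. The dict representation of a tag (with a `task_id` key) is an assumption for illustration only.

```python
# Sketch of S31/S32 above: count tags via their task identifiers, then choose
# the calculation method by comparing the count L with the preset tag count
# (2 in this embodiment). The dict tag representation is an assumption.

PRESET_TAG_COUNT = 2

def choose_method(history_tags, first_method, second_method):
    task_ids = [tag["task_id"] for tag in history_tags]  # S31: gather IDs
    l = len(task_ids)                                    # tag count L
    return first_method if l < PRESET_TAG_COUNT else second_method

first = lambda: "N = M + K"
second = lambda: "compare T[-1] with T[-2]"
print(choose_method([{"task_id": "a"}], first, second)())  # L = 1 < 2
print(choose_method([{"task_id": "a"}, {"task_id": "b"}], first, second)())
```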
Preferably, the first captured data amount calculation method specifically includes:
calculating the quantity of the single-time captured data according to a first calculation formula;
the first calculation formula is specifically as follows:
N=M+K;
and N is the single captured data quantity, M is the default captured data quantity, and K is a captured data quantity adjusting value.
Preferably, the historical processing information tag further includes a task start time and a task end time corresponding to each historical processing service;
the second captured data amount calculation method specifically includes:
in chronological order, obtaining the last historical processing service and the last-but-one historical processing service among all the historical processing services, obtaining the first average task processing time corresponding to the last historical processing service according to the task start time and task end time corresponding to the last historical processing service, and obtaining the second average task processing time corresponding to the last-but-one historical processing service according to the task start time and task end time corresponding to the last-but-one historical processing service;
judging whether the first average task processing time is less than the second average task processing time, if so, calculating the single-time data capturing quantity according to a second calculation formula, and if not, calculating the single-time data capturing quantity according to the first calculation formula;
the second calculation formula is specifically:
N=M-K。
through the first calculation formula and the second calculation formula, the adjustment of the single captured data quantity in a certain range can be realized, namely, the N value is finely adjusted in a certain range based on the M value (the default value preset by the single captured data quantity, namely the default captured data quantity), so that the actual condition of hardware resources is better met, the calculation of the single captured data quantity can be simplified to a certain extent, the complexity is reduced, and the efficiency of background batch processing service is improved; the first average task processing time and the second average task processing time can reflect the current hardware resource situation and the performance situation of the background batch processing service to a certain extent, so that the first average task processing time and the second average task processing time in the second captured data quantity calculation method are compared and used as the adjustment basis of the single captured data quantity, the actual situation is better met, and the optimization effect is better.
Specifically, in this embodiment, the first average task processing time, corresponding to the last historical processing service, is T[-1]; the second average task processing time, corresponding to the last-but-one historical processing service, is T[-2]; and the captured data quantity adjustment value K is set to 5. Thus when T[-1] < T[-2], the N value is calculated as N = M - 5, and when T[-1] ≥ T[-2], as N = M + 5.
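Putting the two formulas together, the complete N calculation of this embodiment might be sketched as follows. The list of average task processing times (oldest first, so that index -1 is T[-1]) is an assumed representation, not from the patent.

```python
# Sketch of the complete N calculation in this embodiment: M = 20, K = 5,
# preset tag count 2. avg_times holds the average task processing times of
# past runs, oldest first, so avg_times[-1] is T[-1] and avg_times[-2] is
# T[-2]. The list representation is an assumption.

M, K, PRESET_TAG_COUNT = 20, 5, 2

def single_capture_quantity(avg_times):
    if not avg_times:                   # no history: output the default M
        return M
    if len(avg_times) < PRESET_TAG_COUNT:
        return M + K                    # first formula: N = M + K
    if avg_times[-1] < avg_times[-2]:   # T[-1] < T[-2]
        return M - K                    # second formula: N = M - K
    return M + K                        # T[-1] >= T[-2]: first formula

print(single_capture_quantity([]))          # 20
print(single_capture_quantity([1.4]))       # 25
print(single_capture_quantity([1.6, 1.2]))  # T[-1] < T[-2] -> 15
print(single_capture_quantity([1.2, 1.6]))  # T[-1] >= T[-2] -> 25
```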
Preferably, in S4, the implementation of performing batch processing on the target batch processing service data is as follows:
and storing the target batch processing service data into a cache of a service process to obtain target cache service data, and processing the target cache service data.
The captured target batch processing service data is stored in a cache of the service for processing, unaffected by other service data, which reduces the error rate of the background when batch-processing services and improves processing efficiency to a certain degree.
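The caching step might be sketched as below; the snapshot-then-process shape is an illustrative assumption about how the process-local cache isolates the target data.

```python
# Sketch of the caching step above: the grabbed target data is snapshotted
# into a cache belonging to the service process before being processed, so
# it is isolated from other service data. Names are assumptions.

def process_via_cache(target_batch, handle):
    target_cache = list(target_batch)               # copy into the process cache
    return [handle(item) for item in target_cache]  # process cached data only

print(process_via_cache([1, 2, 3], lambda x: x * 10))  # [10, 20, 30]
```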
Specifically, a complete flow of the optimization method for the background batch processing service in this embodiment is shown in fig. 4.
In a second embodiment, as shown in fig. 5, a system for optimizing background batch processing services is characterized by comprising a reading module, a judging module, a calculating module, a capturing module and a processing module;
the reading module is used for reading a background service database;
the judging module is used for judging whether the background service database contains batch services to be processed;
the calculation module is used for obtaining the single capture data quantity of the batch service to be processed according to all the historical processing information labels when the judgment module judges that the background service database contains the batch service to be processed;
the grabbing module is used for grabbing target batch processing service data in the batch service to be processed according to the single data grabbing quantity;
the processing module is used for carrying out batch processing on the target batch processing service data and storing a target processing information tag corresponding to the target batch processing service data into the background service database as the historical processing information tag;
the computing module is further configured to use the remaining part of the batch service to be processed excluding the target batch processing service data as the updated batch service to be processed, obtain the historical processing information tags corresponding to each historical processing service in the background service database, and obtain the updated single capture data quantity corresponding to the updated batch service to be processed according to all the historical processing information tags.
The optimization system for the background batch processing service solves the problems that the background batch processing service is fixed in quantity, depends on manual intervention excessively and cannot utilize hardware resources to the maximum in the prior art, can optimize and adjust the quantity of data captured at a time automatically in real time, reduces the manual intervention, utilizes the hardware resources to the maximum, greatly optimizes the performance of the background batch processing service, and reduces the transformation cost of background programs to a certain extent by adopting cluster type unified deployment.
Preferably, the determining module is further configured to:
and judging whether the historical processing service exists in the background service database.
When services are batch-processed in the background, the batch service to be processed in the background service database may be the first batch processed, that is, there is no historical processing service corresponding to it; in this case the single capture data quantity need not be calculated from historical processing service information, and the default captured data quantity M is used directly as the single capture data quantity N. Through the above judging module, both the automatic adjustment mode and the traditional manual setting mode are supported, giving strong universality.
Preferably, the historical processing information tag includes a task identifier corresponding to each historical processing service.
Through the task identifiers, the number of historical processing information tags can subsequently be counted conveniently, so that the single captured data quantity can be calculated with the appropriate method according to the number of tags. This allows the value of the single captured data quantity to be adjusted automatically in real time, provides the ability to collect data and compute performance adjustments in real time, improves background batch processing performance, and maximizes hardware resource utilization.
Preferably, the calculation module is specifically configured to:
acquiring task identifiers corresponding to each historical processing service in the background service database one by one, counting the number of all the task identifiers and taking the number of all the task identifiers as the number of historical processing information labels;
judging whether the number of the historical processing information tags is smaller than a preset number of tags or not, if so, calculating to obtain the number of the single-time captured data by adopting a preset first captured data number calculation method; if not, calculating to obtain the single-time captured data quantity by adopting a preset second captured data quantity calculation method.
Through the computing module, the automatic adjustment of the single-time captured data quantity within a certain range is realized, so that the automatic adjustment of the performance of the background batch processing service within a certain range is realized, manual intervention is not needed, and the real-time maximized utilization of hardware resources can be ensured; wherein the number of the preset labels can be selected and adjusted according to the actual situation.
Preferably, the calculation module is further specifically configured to:
calculating the quantity of the single-time captured data according to a first calculation formula;
the first calculation formula is specifically as follows:
N=M+K;
and N is the single captured data quantity, M is the default captured data quantity, and K is a captured data quantity adjusting value.
Preferably, the historical processing information tag further includes a task start time and a task end time corresponding to each historical processing service;
the calculation module is further specifically configured to:
in chronological order, obtaining the last historical processing service and the last-but-one historical processing service among all the historical processing services, obtaining the first average task processing time corresponding to the last historical processing service according to the task start time and task end time corresponding to the last historical processing service, and obtaining the second average task processing time corresponding to the last-but-one historical processing service according to the task start time and task end time corresponding to the last-but-one historical processing service;
judging whether the first average task processing time is less than the second average task processing time, if so, calculating the single-time data capturing quantity according to a second calculation formula, and if not, calculating the single-time data capturing quantity according to the first calculation formula;
the second calculation formula is specifically:
N=M-K。
through the computing module, on one hand, the actual situation of hardware resources is better met, on the other hand, the computation of the data quantity captured at one time can be simplified to a certain extent, the complexity is reduced, and the efficiency of background batch processing service is improved; the first average task processing time and the second average task processing time can reflect the current hardware resource situation and the performance situation of the background batch processing service to a certain extent, so that the first average task processing time and the second average task processing time in the second captured data quantity calculation method are compared and used as the adjustment basis of the single captured data quantity, the actual situation is better met, and the optimization effect is better.
Specifically, the K value in the present embodiment is set to 5.
Preferably, the processing module is specifically configured to:
and storing the target batch processing service data into a cache of a service process to obtain target cache service data, and processing the target cache service data.
The processing module stores the captured target batch processing service data into the cache of the service for processing, unaffected by other service data, which reduces the error rate of the background when batch-processing services and improves processing efficiency to a certain degree.
Third embodiment: based on the first embodiment and the second embodiment, this embodiment further discloses an optimization system for background batch processing services, comprising a processor, a memory, and a computer program stored in the memory and executable on the processor; when run, the computer program implements the specific steps S1 to S5 shown in fig. 2.
The optimization of the background batch processing service is realized by the computer program stored in the memory and running on the processor, the single data capturing quantity can be automatically optimized and adjusted in real time, the manual intervention is reduced, the hardware resources are utilized to the maximum extent, the performance of the background batch processing service is greatly optimized, and the transformation cost of the background program can be reduced to a certain extent by adopting cluster type unified deployment.
Specifically, the present embodiment includes at least one X86 host (three are suggested), each including a processor, a memory, and a computer program; the computer program includes an input unit, a logical operation unit, and an output unit. The relevant information of the historical processing services in the DB database (i.e., the historical processing information tags) is taken as the content of the input unit and passed to the logical operation unit, which automatically calculates the single captured data quantity N; the result is then output by the output unit.
The input unit of the present embodiment:
(1) supporting program interface or message mode input;
(2) and manual configuration modes are supported, and manual intervention is realized.
A logic operation unit:
(1) automatically calculates the N value according to the input, allowing both increase and decrease, continuous operation, and real-time calculation and adjustment during processing;
(2) distinguishes records according to the task identifiers in the historical processing information tags, i.e., supports thread/process/host-level records;
(3) supports both manual setting and automatic adjustment modes.
An output unit:
and outputting a new N value for reading by a batch processing program according to the logical operation result.
The present embodiment also provides a computer storage medium having at least one instruction stored thereon, where the instruction when executed implements the specific steps of S1-S5.
By executing a computer storage medium containing at least one instruction, the optimization of the background batch processing service is realized, the single captured data quantity can be automatically optimized and adjusted in real time, the manual intervention is reduced, the hardware resources are utilized to the maximum extent, the performance of the background batch processing service is greatly optimized, and the transformation cost of a background program can be reduced to a certain extent by adopting cluster type unified deployment.
Details of S1 to S5 in this embodiment are described in detail in the first embodiment and in fig. 2 to fig. 4, and are not repeated here.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (10)

1. A method for optimizing background batch processing service is characterized by comprising the following steps:
step 1: reading a background service database;
step 2: judging whether the background service database contains batch services to be processed, if so, executing the step 3, otherwise, returning to the step 1;
step 3: acquiring historical processing information labels corresponding to each historical processing service in the background service database one by one, and acquiring the single captured data quantity of the batch services to be processed according to all the historical processing information labels;
step 4: capturing target batch processing service data in the batch service to be processed according to the single captured data quantity, performing batch processing on the target batch processing service data, and storing a target processing information tag corresponding to the target batch processing service data as the historical processing information tag into the background service database;
step 5: taking the rest part except the target batch processing service data in the batch services to be processed as the updated batch services to be processed, and repeating the steps 3 to 4 until the updated batch services to be processed complete batch processing.
2. The method for optimizing background batch processing services according to claim 1, further comprising the following steps before said step 3:
and judging whether historical processing services exist in the background service database, if so, sequentially executing the steps 3 to 5, otherwise, taking the default capture data quantity as the single capture data quantity of the batch services to be processed, and sequentially executing the steps 4 to 5.
3. The background batch processing service optimization method according to claim 2, wherein the historical processing information tag includes a task identifier corresponding to each historical processing service.
4. The method for optimizing background batch processing services according to claim 3, wherein the step 3 specifically comprises:
step 31: acquiring task identifiers corresponding to each historical processing service in the background service database one by one, counting the number of all the task identifiers, and taking the number of all the task identifiers as the number of historical processing information labels;
step 32: judging whether the number of the historical processing information tags is smaller than a preset number of tags or not, if so, calculating to obtain the number of the single-time captured data by adopting a preset first captured data number calculation method; if not, calculating to obtain the single-time captured data quantity by adopting a preset second captured data quantity calculation method.
5. The optimization method of background batch processing services according to claim 4, wherein the first captured data amount calculation method specifically comprises:
calculating the quantity of the single-time captured data according to a first calculation formula;
the first calculation formula is specifically as follows:
N=M+K;
and N is the single captured data quantity, M is the default captured data quantity, and K is a captured data quantity adjusting value.
6. The background batch processing service optimization method according to claim 5, wherein the historical processing information tag further includes a task start time and a task end time corresponding to each historical processing service;
the second captured data amount calculation method specifically includes:
in chronological order, obtaining the last historical processing service and the last-but-one historical processing service among all the historical processing services, obtaining the first average task processing time corresponding to the last historical processing service according to the task start time and task end time corresponding to the last historical processing service, and obtaining the second average task processing time corresponding to the last-but-one historical processing service according to the task start time and task end time corresponding to the last-but-one historical processing service;
judging whether the first average task processing time is less than the second average task processing time, if so, calculating the single-time data capturing quantity according to a second calculation formula, and if not, calculating the single-time data capturing quantity according to the first calculation formula;
the second calculation formula is specifically:
N=M-K。
7. the method for optimizing background batch processing service according to claim 1, wherein in the step 4, the batch processing of the target batch processing service data is implemented by:
and storing the target batch processing service data into a cache of a service process to obtain target cache service data, and processing the target cache service data.
8. The optimization system for the background batch processing service is characterized by comprising a reading module, a judging module, a calculating module, a grabbing module and a processing module;
the reading module is used for reading a background service database;
the judging module is used for judging whether the background service database contains batch services to be processed;
the calculation module is used for obtaining the single capture data quantity of the batch service to be processed according to all the historical processing information labels when the judgment module judges that the background service database contains the batch service to be processed;
the grabbing module is used for grabbing target batch processing service data in the batch service to be processed according to the single data grabbing quantity;
the processing module is used for carrying out batch processing on the target batch processing service data and storing a target processing information tag corresponding to the target batch processing service data into the background service database as the historical processing information tag;
the computing module is further configured to use the remaining part of the batch service to be processed excluding the target batch processing service data as the updated batch service to be processed, obtain the historical processing information tags corresponding to each historical processing service in the background service database, and obtain the updated single capture data quantity corresponding to the updated batch service to be processed according to all the historical processing information tags.
9. A background batch processing business optimization system, comprising a processor, a memory and a computer program stored in the memory and executable on the processor, the computer program when executed implementing the method steps of any of claims 1 to 7.
10. A computer storage medium, the computer storage medium comprising: at least one instruction which, when executed, implements the method steps of any one of claims 1 to 7.
CN202010316976.5A 2020-04-21 2020-04-21 Background batch processing business optimization method, system and storage medium Active CN111506410B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010316976.5A CN111506410B (en) 2020-04-21 2020-04-21 Background batch processing business optimization method, system and storage medium


Publications (2)

Publication Number Publication Date
CN111506410A true CN111506410A (en) 2020-08-07
CN111506410B CN111506410B (en) 2023-05-12

Family

ID=71867589

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010316976.5A Active CN111506410B (en) 2020-04-21 2020-04-21 Background batch processing business optimization method, system and storage medium

Country Status (1)

Country Link
CN (1) CN111506410B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112905635A (en) * 2021-03-11 2021-06-04 深圳市分期乐网络科技有限公司 Service processing method, device, equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110154358A1 (en) * 2009-12-17 2011-06-23 International Business Machines Corporation Method and system to automatically optimize execution of jobs when dispatching them over a network of computers
CN108256706A (en) * 2016-12-28 2018-07-06 平安科技(深圳)有限公司 Method for allocating tasks and device
US20190147430A1 (en) * 2017-11-10 2019-05-16 Apple Inc. Customizing payment sessions with machine learning models
CN110297711A (en) * 2019-05-16 2019-10-01 平安科技(深圳)有限公司 Batch data processing method, device, computer equipment and storage medium
CN110457614A (en) * 2019-07-03 2019-11-15 南方电网数字电网研究院有限公司 Reduce data increment update method, device and the computer equipment of Data Concurrent amount


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
PHILIP DEXTER 等: "An Error-Reflective Consistency Model for Distributed Data Stores", 《2019 IEEE INTERNATIONAL PARALLEL AND DISTRIBUTED PROCESSING SYMPOSIUM (IPDPS)》 *
宋亚奇等: "云计算技术在输电线路状态监测***中的应用", 《数学的实践与认识》 *


Also Published As

Publication number Publication date
CN111506410B (en) 2023-05-12


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant