CN114064154A - Configuration file management method, device, server and storage medium


Info

Publication number
CN114064154A
Authority
CN
China
Prior art keywords: configuration, configuration items, file, configuration file, items
Legal status: Pending
Application number
CN202111475890.8A
Other languages
Chinese (zh)
Inventor
伍育珂
甘兵
Current Assignee
Digital Guangdong Network Construction Co Ltd
Original Assignee
Digital Guangdong Network Construction Co Ltd
Application filed by Digital Guangdong Network Construction Co Ltd
Priority to CN202111475890.8A
Publication of CN114064154A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 - Arrangements for executing specific programs
    • G06F 9/445 - Program loading or initiating
    • G06F 9/44505 - Configuring for program initiating, e.g. using registry, configuration files
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/10 - File systems; File servers
    • G06F 16/17 - Details of further file system functions
    • G06F 16/172 - Caching, prefetching or hoarding of files

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a configuration file management method, device, server and storage medium. The method comprises the following steps: acquiring a configuration file, and sorting the configuration items contained in the configuration file based on their reading times while running a service process based on the configuration file; caching the configuration items to a process memory in descending order of reading times; and, after the data volume of the cached configuration items is greater than or equal to the storage threshold of the process memory, caching the remaining configuration items to a local file and continuing to run the service process based on the configuration file. With this technical scheme, configuration items with high reading times are cached in the process memory and configuration items with low reading times are cached in the local file: once the data volume of the configuration items cached in the process memory exceeds the storage threshold of the process memory, the remaining configuration items are cached in the local file. This reduces the storage pressure on the process memory, shortens the loading time of the service process, and improves the operation efficiency of the service process.

Description

Configuration file management method, device, server and storage medium
Technical Field
The embodiment of the invention relates to computer technology, in particular to a method, a device, a server and a storage medium for managing configuration files.
Background
With the development of information technology, e-government has also driven innovation in government management systems. To make the approval and handling processes of departments paperless, government departments build e-government systems using development technologies such as JAVA and .NET. An e-government system is an information service and information processing system based on internet technology and oriented to government organs and other government institutions; it uses modern information technology to informatize government work so as to improve the level of administration according to law in government departments.
In existing e-government systems, the configuration information required to run each service process generally has to be stored centrally in a configuration center. Since configuration information is usually added to the configuration center by developers during the development stage, the number of invalid configuration items in the configuration information keeps growing over time. Staff cannot simply delete such a configuration item, because they cannot determine whether it still needs to be called. As a result, configuration files become more and more bloated, the loading time of each service process in the e-government system grows longer and longer, and the working experience suffers.
Disclosure of Invention
The invention provides a method and a device for managing a configuration file, a server and a storage medium, which are used for improving the running efficiency of a service process.
In a first aspect, an embodiment of the present invention provides a method for managing a configuration file, where the method includes:
acquiring a configuration file, and sequencing all configuration items contained in the configuration file based on the reading times in the process of running a service process based on the configuration file;
sequentially caching the configuration items to a process memory according to the sequence of the reading times from high to low;
after the data volume of the cached configuration items is larger than or equal to the storage threshold value of the process memory, caching the rest configuration items to a local file, and continuing to run the service process based on the configuration file.
The embodiment of the invention provides a configuration file management method, which comprises the following steps: acquiring a configuration file, and sorting the configuration items contained in the configuration file based on their reading times while running a service process based on the configuration file; caching the configuration items to a process memory in descending order of reading times; and, after the data volume of the cached configuration items is greater than or equal to the storage threshold of the process memory, caching the remaining configuration items to a local file and continuing to run the service process based on the configuration file. With this technical scheme, once the service process obtains the configuration file it can run based on it; during operation it has to read the configuration items contained in the configuration file, so the reading times of each configuration item can be determined and the configuration items sorted accordingly. Configuration items with high reading times are cached in the process memory and configuration items with low reading times are cached in the local file: specifically, the configuration items are cached to the process memory in descending order of reading times, and once the data volume of the configuration items cached in the process memory reaches the storage threshold of the process memory, the remaining configuration items are cached in the local file and the service process continues to run based on the configuration file. This keeps the service process running smoothly while reducing the storage pressure on the process memory, shortens the loading time of the service process, and improves its operation efficiency.
Further, in the process of running the service process based on the configuration file, sorting the configuration items included in the configuration file based on the number of times of reading includes:
in the process of running the service process based on the configuration file, if the data volume of the configuration file is greater than the maximum memory volume of the process memory, sequencing each configuration item contained in the configuration file based on the reading times.
Further, in the process of running the service process based on the configuration file, sorting the configuration items included in the configuration file based on the number of reading times, further comprising:
and sequencing each configuration item contained in the configuration file based on the reading times at preset time intervals in the process of running the service process based on the configuration file.
Further, in the process of running the service process based on the configuration file, the method includes:
and running a preset function, and updating the reading times of the configuration items based on the preset function when the configuration items are read.
Further, updating the number of readings of the configuration item based on the preset function includes:
updating the reading times of the configuration items cached to the process memory based on the preset function;
updating the reading times of the configuration items cached to the local file based on the preset function.
Further, the method further comprises:
determining the similarity between the configuration items, and if the similarity of any plurality of configuration items is greater than a preset threshold value, determining the configuration items as similar configuration items;
and deleting other configuration items while keeping any configuration item.
Further, the method further comprises:
if the service process is a single-node service, storing the configuration item based on a local disk before the service process is restarted;
and if the service process is cluster service, storing the configuration item based on a preset database before the service process is restarted.
In a second aspect, an embodiment of the present invention further provides a device for managing configuration files, including:
the sequencing module is used for acquiring the configuration files and sequencing the configuration items contained in the configuration files based on the reading times in the process of running the service process based on the configuration files;
the first cache module is used for caching the configuration items to a process memory in sequence according to the sequence of the reading times from high to low;
and the second caching module is used for caching the rest configuration items to a local file and continuing to run the service process based on the configuration file after the data volume of the cached configuration items is greater than or equal to the storage threshold value of the process memory.
In a third aspect, an embodiment of the present invention further provides a server, where the server includes:
one or more processors;
storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the profile management method as described in any of the first aspects.
In a fourth aspect, embodiments of the present invention also provide a storage medium containing computer-executable instructions for performing the method of profile management according to any one of the first aspect when executed by a computer processor.
In a fifth aspect, the present application provides a computer program product comprising computer instructions which, when run on a computer, cause the computer to perform the method of profile management as provided in the first aspect.
It should be noted that all or part of the computer instructions may be stored on the computer readable storage medium. The computer-readable storage medium may be packaged with a processor of the profile management apparatus, or may be packaged separately from the processor of the profile management apparatus, which is not limited in this application.
For the descriptions of the second, third, fourth and fifth aspects in this application, reference may be made to the detailed description of the first aspect; in addition, for the beneficial effects described in the second aspect, the third aspect, the fourth aspect and the fifth aspect, reference may be made to the beneficial effect analysis of the first aspect, and details are not repeated here.
In the present application, the names of the above-mentioned profile management means do not limit the devices or functional modules themselves, and in actual implementation, these devices or functional modules may appear by other names. Insofar as the functions of the respective devices or functional modules are similar to those of the present application, they fall within the scope of the claims of the present application and their equivalents.
These and other aspects of the present application will be more readily apparent from the following description.
Drawings
Fig. 1 is a flowchart of a method for managing configuration files according to an embodiment of the present invention;
fig. 2 is a flowchart of a configuration file management method according to a second embodiment of the present invention;
fig. 3 is a schematic structural diagram of a configuration file management apparatus according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of a server according to a fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone.
The terms "first" and "second" and the like in the description and drawings of the present application are used for distinguishing different objects or for distinguishing different processes for the same object, and are not used for describing a specific order of the objects.
Furthermore, the terms "including" and "having," and any variations thereof, as referred to in the description of the present application, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the operations (or steps) as a sequential process, many of the operations can be performed in parallel, concurrently or simultaneously. In addition, the order of the operations may be re-arranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like. In addition, the embodiments and features of the embodiments in the present invention may be combined with each other without conflict.
It should be noted that in the embodiments of the present application, words such as "exemplary" or "for example" are used to indicate examples, illustrations or explanations. Any embodiment or design described herein as "exemplary" or "e.g.," is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present concepts related in a concrete fashion.
In the description of the present application, the meaning of "a plurality" means two or more unless otherwise specified.
Example one
Fig. 1 is a flowchart of a configuration file management method according to an embodiment of the present invention, where this embodiment is applicable to a case where a data size of a configuration file is larger than a local service process memory, and the method may be executed by a configuration file management apparatus, as shown in fig. 1, specifically including the following steps:
and step 110, acquiring the configuration file, and sequencing all configuration items contained in the configuration file based on the reading times in the process of running the service process based on the configuration file.
The service process can be a service process of the e-government affair system, and normal operation of the service process can realize normal operation of the e-government affair system.
The service process may obtain a configuration file from the remote configuration center, the configuration file may contain configuration information, and the configuration information may contain a plurality of configuration items.
Specifically, in the process of acquiring the configuration file and operating based on the configuration file, the service process may read a plurality of configuration items in the configuration file to realize normal operation of the service process. Of course, in the process of reading the configuration items by the service process, a preset function may be called, the number of times of reading of each configuration item is determined based on the preset function, and each configuration item is sorted according to the number of times of reading.
In the embodiment of the invention, the service process can call the preset function in the running process to determine the reading times of each configuration item in the running process, and then the configuration items are sequenced according to the reading times to determine the importance of each configuration item on the running of the service process.
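As a concrete illustration of this step, the following minimal Java sketch counts reads per configuration item and sorts the items in descending order of reading times; the class and method names (ConfigEntry, sortByReadCount) are illustrative and not taken from the patent.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.Comparator;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative structure: each configuration item keeps its value and a read counter.
class ConfigEntry {
    final String key;
    final String value;
    final AtomicLong readCount = new AtomicLong();

    ConfigEntry(String key, String value) {
        this.key = key;
        this.value = value;
    }
}

class ConfigSorter {
    // Sort all configuration items by read count, highest first, so that the most
    // frequently read items are cached to the process memory first.
    static List<ConfigEntry> sortByReadCount(Collection<ConfigEntry> items) {
        List<ConfigEntry> sorted = new ArrayList<>(items);
        sorted.sort(Comparator.comparingLong((ConfigEntry e) -> e.readCount.get()).reversed());
        return sorted;
    }
}
```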
And step 120, sequentially caching the configuration items to the process memory according to the sequence of the reading times from high to low.
The process memory is a storage unit of the service process; when the service process runs, configuration items can be loaded from the process memory so that the service process runs normally. When the amount of data held in the process memory exceeds its maximum storage amount, the loading speed of the service process drops and the normal operation of the e-government system is affected; when the amount of data held in the process memory is below the storage threshold, the service process can run normally. Of course, the maximum storage amount and the storage threshold of the process memory may be set according to actual requirements and are not specifically limited here.
Specifically, after the configuration items are sorted by reading times, configuration items with higher reading times have higher cache priority, so they are cached to the process memory first and can be read from it again quickly. While the data amount cached in the process memory is below the storage threshold, the service process runs normally; when the data amount of the configuration items cached in the process memory becomes greater than or equal to the storage threshold, caching configuration items to the process memory can be stopped.
In the embodiment of the invention, the configuration items read most often while the service process runs are cached to the process memory, from which they are easy to load; since these items are read many times, the total time spent reading is reduced and the loading speed of the service process is improved.
And step 130, caching the rest configuration items to the local file after the data volume of the cached configuration items is greater than or equal to the storage threshold value of the process memory, and continuing to run the service process based on the configuration file.
The local file can also be used to cache configuration items and thus provides persistence. Once enough configuration-item data has been cached in the process memory that caching further configuration items there would slow the loading of the service process, the remaining configuration items can be cached to the local file.
Specifically, while configuration items are cached to the process memory, the data amount of the configuration items already cached there is tracked; if that data amount is greater than or equal to the storage threshold of the process memory, caching to the process memory stops and the remaining configuration items are cached to the local file. After caching finishes, the service process can continue to run based on the configuration items cached in the process memory and in the local file.
In practical application, when the service process reads a configuration item cached in the local file, the configuration item first has to be loaded into the process memory and is then read from the process memory. Reading a configuration item cached in the local file is therefore slower than reading one cached in the process memory. Because the configuration items cached in the local file are read less often than those cached in the process memory, this caching scheme maximizes the loading speed of the service process.
In the embodiment of the invention, configuration items with more reading times have higher caching priority and are cached to the process memory. After the configuration items are sorted by reading times, they are cached to the process memory in descending order of reading times; once the data volume of the configuration items cached in the process memory exceeds the storage threshold of the process memory, the remaining configuration items are cached to the local file and the service process continues to run based on the configuration file. This keeps the service process running normally while making the most of the process memory's cache capacity.
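A minimal sketch of steps 120 and 130 under the assumptions above: items arrive already sorted by reading times, item sizes are approximated by their UTF-8 byte length, and the storage threshold and local file path are supplied by the caller. It reuses the illustrative ConfigEntry class from the earlier sketch.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

class ConfigCachePartitioner {
    // Cache items into the process-memory map in descending order of reading times;
    // once the cached data volume reaches the storage threshold, spill the remaining
    // items to a local file as simple key=value lines.
    static Map<String, String> partition(List<ConfigEntry> sortedByReads,
                                         long storageThresholdBytes,
                                         Path localFile) throws IOException {
        Map<String, String> processMemoryCache = new LinkedHashMap<>();
        List<String> spilledLines = new ArrayList<>();
        long cachedBytes = 0;

        for (ConfigEntry e : sortedByReads) {
            long size = e.key.getBytes(StandardCharsets.UTF_8).length
                      + e.value.getBytes(StandardCharsets.UTF_8).length;
            if (cachedBytes < storageThresholdBytes) {
                processMemoryCache.put(e.key, e.value);   // frequently read: keep in memory
                cachedBytes += size;
            } else {
                spilledLines.add(e.key + "=" + e.value);  // rarely read: cache in the local file
            }
        }
        Files.write(localFile, spilledLines, StandardCharsets.UTF_8);
        return processMemoryCache;
    }
}
```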
The configuration file management method provided by the embodiment of the invention comprises the following steps: acquiring a configuration file, and sequencing all configuration items contained in the configuration file based on the reading times in the process of running a service process based on the configuration file; sequentially caching all the configuration items to a process memory according to the sequence of the reading times from high to low; and caching the rest configuration items to the local file after the data volume of the cached configuration items is larger than or equal to the storage threshold value of the process memory, and continuing to run the service process based on the configuration file. According to the technical scheme, after the service process obtains the configuration file, the service process can be operated based on the configuration file, the service process needs to read the configuration items contained in the configuration file in the operation process, the reading times of the configuration items can be determined certainly, the configuration items are sequenced according to the reading times, the configuration items with a large reading time are cached in the process memory, the configuration items with a small reading time are cached in the local file, specifically, the configuration items can be cached in the process memory sequentially according to the sequence from high to low of the reading times, after the data quantity of the configuration items cached in the process memory is larger than the storage threshold value of the process memory, the rest configuration items are cached in the local file, and the service process is operated based on the configuration file continuously. Under the condition of reducing the storage pressure of a process memory, the smooth operation of the service process is realized, the loading time of the service process is reduced, and the operation efficiency of the service process is improved.
Example two
Fig. 2 is a flowchart of a configuration file management method according to a second embodiment of the present invention, which is a further refinement on the basis of the above embodiment. As shown in fig. 2, in this embodiment, the method may further include:
step 210, obtaining a configuration file, and sorting each configuration item contained in the configuration file based on the reading times in the process of running the service process based on the configuration file.
In one embodiment, step 210 may specifically include:
and acquiring a configuration file, and sequencing each configuration item contained in the configuration file based on the reading times if the data volume of the configuration file is greater than the maximum memory capacity of a process memory in the process of running the service process based on the configuration file.
If the data volume of the configuration file is greater than the maximum memory capacity of the process memory, a part of configuration items contained in the configuration file can be cached in the process memory, and the other part of configuration items can be cached in the local file.
Specifically, in the process of acquiring the configuration file and running based on the configuration file, the service process may compare the data volume of the configuration file with the maximum memory volume of the process memory, and if the data volume of the configuration file is less than or equal to the maximum memory volume of the process memory, the configuration items included in the configuration file may be all cached in the process memory, so as to implement normal high-speed running of the service process; if the data volume of the configuration file is larger than the maximum storage volume of the process memory, a part of configuration items contained in the configuration file can be cached in the process memory, and the other part of configuration items can be cached in the local file. At this time, the configuration items included in the configuration file may be sorted based on the number of readings.
In the embodiment of the present invention, when the data size of the configuration file is greater than the maximum storage amount of the service process, the configuration items included in the configuration file may be sorted based on the number of times of reading, so that a part of the configuration items included in the configuration file is cached in the process memory, and another part of the configuration items is cached in the local file.
In another embodiment, step 210 may further include:
and acquiring a configuration file, and sequencing each configuration item contained in the configuration file based on the reading times at preset time intervals in the process of running the service process based on the configuration file.
The preset time may be determined according to actual needs, and is not specifically limited herein, for example, the preset time may be one week.
Specifically, in the process of acquiring the configuration file and running based on the configuration file, the service process may sequence the configuration items included in the configuration file based on the number of times of reading every preset time.
In practical application, the data amount of the configuration items cached in the process memory can be checked at preset time intervals, and if the data amount is larger than the maximum storage amount of the process memory, the configuration items contained in the configuration file are sorted based on the reading times.
In the implementation manner of the embodiment of the invention, each configuration item contained in the configuration file is sorted based on the reading times at intervals of preset time; of course, the data amount of the configuration items cached in the process memory may also be checked every preset time, and if the data amount is greater than the maximum storage amount of the process memory, the configuration items included in the configuration file are sorted based on the number of times of reading. So as to realize that one part of the configuration items contained in the configuration file is cached to the process memory, and the other part is cached to the local file.
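A sketch of this periodic check using a scheduled task inside the service process; the interval unit and the check itself are placeholders (the patent gives one week as an example of the preset time).

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class PeriodicResortTask {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // Every `presetDays`, run the supplied check: e.g. compare the cached data volume
    // against the maximum storage amount of the process memory and, if it is exceeded,
    // re-sort the configuration items by reading times.
    void start(long presetDays, Runnable checkAndResort) {
        scheduler.scheduleAtFixedRate(checkAndResort, presetDays, presetDays, TimeUnit.DAYS);
    }
}

// Usage (illustrative, names are placeholders):
// new PeriodicResortTask().start(7, () -> {
//     if (cachedBytes > maxProcessMemoryBytes) { resortByReadingTimes(); }
// });
```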
In one embodiment, in the process of running the service process based on the configuration file, the method comprises the following steps:
and running a preset function, and updating the reading times of the configuration items based on the preset function when the configuration items are read.
The service process may include an agent, which can run in a microservice process such as ms1. When ms1 starts, the agent's logic is executed: it connects to the remote configuration center and loads the configuration items contained in the configuration file of the remote configuration center into the memory space of ms1, i.e. the process memory. If the configuration file contains two configuration items, they can be expressed as p1:v1 and p2:v2. The agent loads p1 and p2 into the process memory and puts them into a map in which one key corresponds to multiple values, namely: { p1: {v1, 0}, p2: {v2, 2} }, where 0 denotes the number of reads of p1 and 2 denotes the number of reads of p2.
The preset function provided by the agent may be readProfile(String key). When ms1 needs configuration item p1 to run, it calls the preset function provided by the agent: readProfile(p1). Internally, the function retrieves the map that stores the configuration information according to the configuration item key; for p1, for example, it finds the value {v1, 0}, increases the read count from 0 to 1, stores the map, and then returns v1.
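The paragraphs above can be made concrete with the following sketch of the agent's map and preset function; the exact data structure and the local-file fallback are assumptions consistent with the described behaviour, not the patent's literal implementation.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

class ConfigAgent {
    // One key maps to its value plus a read counter, e.g. { p1 -> (v1, 0), p2 -> (v2, 2) }.
    static final class Entry {
        final String value;
        final AtomicLong readCount;
        Entry(String value, long readCount) {
            this.value = value;
            this.readCount = new AtomicLong(readCount);
        }
    }

    private final Map<String, Entry> processMemory = new ConcurrentHashMap<>();
    private final Path localFile;   // where low-read items were spilled earlier

    ConfigAgent(Path localFile) { this.localFile = localFile; }

    // Preset function: return the configuration item's value and bump its read count.
    // Items that were cached to the local file are loaded back into process memory first.
    String readProfile(String key) throws IOException {
        Entry e = processMemory.get(key);
        if (e == null) {
            String spilled = lookupInLocalFile(key);
            if (spilled == null) return null;
            e = processMemory.computeIfAbsent(key, k -> new Entry(spilled, 0));
        }
        e.readCount.incrementAndGet();   // e.g. p1's count goes from 0 to 1
        return e.value;
    }

    private String lookupInLocalFile(String key) throws IOException {
        if (!Files.exists(localFile)) return null;
        for (String line : Files.readAllLines(localFile)) {   // key=value lines
            int i = line.indexOf('=');
            if (i > 0 && line.substring(0, i).equals(key)) return line.substring(i + 1);
        }
        return null;
    }
}
```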
In the implementation manner of the embodiment of the present invention, the number of times of reading the configuration item may be determined and updated according to a preset function provided by the service process.
Preferably, updating the number of times of reading of the configuration item based on a preset function includes:
updating the reading times of the configuration items cached to the process memory based on a preset function; and updating the reading times of the configuration items cached to the local file based on a preset function.
During the operation of the service process, the configuration item may be cached in the process memory or may be cached in the local file.
Specifically, when the service process runs and the configuration item cached in the service process is read, the reading times corresponding to the configuration item are updated based on a preset function; when the configuration item cached in the local file is read, the configuration item needs to be loaded into the process memory, and the read times corresponding to the configuration item are updated.
In the embodiment of the invention, the service process can call the preset function in the running process to determine the reading times of each configuration item in the running process, and then the configuration items are sequenced according to the reading times to determine the importance of each configuration item on the running of the service process.
And step 220, sequentially caching the configuration items to the process memory according to the sequence of the reading times from high to low.
And step 230, caching the rest configuration items to the local file after the data volume of the cached configuration items is greater than or equal to the storage threshold of the process memory.
Step 240, determining similarity among the configuration items, and if the similarity of any plurality of configuration items is greater than a preset threshold, determining the configuration items as similar configuration items; any configuration item is retained while other configuration items are deleted.
Among the plurality of configuration items included in the configuration information, there may be a configuration item that realizes a similar function, or there may be a configuration item whose number of reads is 0. The configuration items with similar functions and the configuration items with the reading times of 0 both cause configuration file redundancy and influence the loading speed of the service process. If the configuration information contains a plurality of configuration items for realizing similar functions, the service process can preferentially read the configuration item with the minimum data volume.
If configuration items with similar meanings, that is, configuration items realizing similar functions, exist in the configuration file, those configuration items can be regarded as overlapping or similar. For example, the key of a configuration item representing an application logo name may be defined as either logoname or logotitle; for the service process as a whole, the two configuration items overlap and can be unified into one.
Specifically, the similarity of each configuration item in the map may be determined, and specifically, the similarity of the key and the value in each configuration item may be determined. If the similarity of the key or value of any plurality of configuration items exceeds a preset percentage, determining that the configuration items are similar configuration items, and deleting other configuration items while keeping any configuration item at the moment so as to ensure that the redundant configuration items are deleted under the condition that the service process normally runs.
Of course, the preset percentage is not specifically limited herein, and may be determined according to actual requirements, for example, the preset percentage may be 90%.
In practical application, the data size of a plurality of similar configuration items can be determined, and the configuration item with the minimum data size is kept while other configuration items are deleted.
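A minimal sketch of this deduplication, assuming a normalized edit-distance similarity over keys and values and keeping, within each group of similar items, the one with the smallest data size; the metric and the threshold handling are illustrative choices, not prescribed by the patent.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

class ConfigDeduplicator {
    // Remove configuration items whose key or value is more than `threshold` similar
    // to an already kept item, retaining the item with the smallest data size.
    static Map<String, String> dedupe(Map<String, String> items, double threshold) {
        List<Map.Entry<String, String>> bySize = new ArrayList<>(items.entrySet());
        bySize.sort(Comparator.comparingInt(e -> e.getKey().length() + e.getValue().length()));

        Map<String, String> kept = new LinkedHashMap<>();
        for (Map.Entry<String, String> candidate : bySize) {
            boolean similarToKept = kept.entrySet().stream().anyMatch(k ->
                    similarity(candidate.getKey(), k.getKey()) > threshold
                 || similarity(candidate.getValue(), k.getValue()) > threshold);
            if (!similarToKept) kept.put(candidate.getKey(), candidate.getValue());
        }
        return kept;
    }

    // Normalized similarity in [0, 1] based on Levenshtein edit distance.
    static double similarity(String a, String b) {
        int[][] d = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) d[i][0] = i;
        for (int j = 0; j <= b.length(); j++) d[0][j] = j;
        for (int i = 1; i <= a.length(); i++)
            for (int j = 1; j <= b.length(); j++)
                d[i][j] = Math.min(Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1),
                        d[i - 1][j - 1] + (a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1));
        int max = Math.max(a.length(), b.length());
        return max == 0 ? 1.0 : 1.0 - (double) d[a.length()][b.length()] / max;
    }
}
```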
It should be noted that, configuration items with a reading frequency of 0 may also be deleted, so as to further reduce redundancy of the configuration file.
In the embodiment of the invention, the similarity between the configuration items is determined; redundant configuration items whose similarity is greater than the preset percentage are deleted, and configuration items whose reading times are 0 are deleted, so that cache pressure is reduced and the loading speed of the service process is further improved.
And step 250, continuing to run the service process based on the configuration file.
In the embodiment of the invention, after the configuration items contained in the configuration file are cached and deleted, the service process can be continuously operated based on the configuration file.
Step 260, if the service process is a single-node service, before the service process is restarted, storing configuration items based on a local disk; and if the service process is cluster service, storing the configuration item based on a preset database before the service process is restarted.
In the embodiment of the invention, if the service process is a single-node process, before the service process is restarted, the information recorded by the map can be stored in the local disk, and after the service process is restarted, the local disk is reloaded; if the service process is cluster service, before any service process is restarted, the information recorded by the map can be stored in a preset database redis, and after the service process is restarted, reloading is carried out based on the redis.
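A sketch of this persistence step; the key=value serialization, the file path, the redis hash key, and the use of the Jedis client are illustrative stand-ins for "local disk" and "preset database redis".

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;
import java.util.stream.Collectors;
import redis.clients.jedis.Jedis;

class ConfigPersistence {
    // Single-node service: write the recorded map to a local disk file before restart,
    // so the service process can reload it from disk after restarting.
    static void saveToLocalDisk(Map<String, String> recordedMap, Path file) throws Exception {
        String content = recordedMap.entrySet().stream()
                .map(e -> e.getKey() + "=" + e.getValue())
                .collect(Collectors.joining(System.lineSeparator()));
        Files.write(file, content.getBytes(StandardCharsets.UTF_8));
    }

    // Cluster service: store the recorded map in redis before any node restarts,
    // so every node can reload it from the preset database afterwards.
    static void saveToRedis(Map<String, String> recordedMap, String redisHost, int redisPort) {
        try (Jedis jedis = new Jedis(redisHost, redisPort)) {
            recordedMap.forEach((k, v) -> jedis.hset("profile:recorded-map", k, v));
        }
    }
}
```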
The second configuration file management method provided by the embodiment of the invention comprises the following steps: acquiring a configuration file, and sequencing all configuration items contained in the configuration file based on the reading times in the process of running a service process based on the configuration file; sequentially caching all the configuration items to a process memory according to the sequence of the reading times from high to low; caching the rest configuration items to a local file after the data volume of the cached configuration items is larger than or equal to the storage threshold value of the process memory; determining the similarity among the configuration items, and if the similarity of any plurality of configuration items is greater than a preset threshold value, determining the configuration items as similar configuration items; deleting other configuration items while keeping any configuration item; continuing to run the service process based on the configuration file; if the service process is a single-node service, before the service process is restarted, storing a configuration item based on a local disk; and if the service process is cluster service, storing the configuration item based on a preset database before the service process is restarted. According to the technical scheme, after the service process obtains the configuration file, the service process can be operated based on the configuration file, the service process needs to read the configuration items contained in the configuration file in the operation process, the reading times of the configuration items can be determined certainly, the configuration items are sequenced according to the reading times, the configuration items with a large reading time are cached in the process memory, the configuration items with a small reading time are cached in the local file, specifically, the configuration items can be cached in the process memory sequentially according to the sequence from high to low of the reading times, after the data quantity of the configuration items cached in the process memory is larger than the storage threshold value of the process memory, the rest configuration items are cached in the local file, and the service process is operated based on the configuration file continuously. Under the condition of reducing the storage pressure of a process memory, the smooth operation of the service process is realized, the loading time of the service process is reduced, and the operation efficiency of the service process is improved.
In addition, redundant configuration items can be deleted based on the similarity between the configuration items, so that the cache pressure is reduced, the loading speed of a service process is further increased, and the loading speed of an e-government system is further increased.
EXAMPLE III
Fig. 3 is a schematic structural diagram of a configuration file management apparatus according to a third embodiment of the present invention, where the apparatus is applicable to a case where a data size of a configuration file is larger than a local service process memory. The apparatus may be implemented by software and/or hardware and is typically integrated in a server.
As shown in fig. 3, the apparatus includes:
a sorting module 310, configured to obtain a configuration file, and sort, based on the number of reads, each configuration item included in the configuration file in a process of running a service process based on the configuration file;
the first cache module 320 is configured to cache each configuration item to the process memory in sequence according to a sequence from high to low read times;
the second caching module 330 is configured to cache the remaining configuration items to a local file after the data amount of the cached configuration items is greater than or equal to the storage threshold of the process memory, and continue to run the service process based on the configuration file.
The configuration file management apparatus provided in this embodiment obtains a configuration file, and sorts, based on the number of times of reading, each configuration item included in the configuration file in a process of running a service process based on the configuration file; sequentially caching the configuration items to a process memory according to the sequence of the reading times from high to low; after the data volume of the cached configuration items is larger than or equal to the storage threshold value of the process memory, caching the rest configuration items to a local file, and continuing to run the service process based on the configuration file. According to the technical scheme, after the service process obtains the configuration file, the service process can be operated based on the configuration file, the service process needs to read the configuration items contained in the configuration file in the operation process, the reading times of the configuration items can be determined certainly, the configuration items are sequenced according to the reading times, the configuration items with a large reading time are cached in the process memory, the configuration items with a small reading time are cached in the local file, specifically, the configuration items can be cached in the process memory sequentially according to the sequence from high to low of the reading times, after the data quantity of the configuration items cached in the process memory is larger than the storage threshold value of the process memory, the rest configuration items are cached in the local file, and the service process is operated based on the configuration file continuously. Under the condition of reducing the storage pressure of a process memory, the smooth operation of the service process is realized, the loading time of the service process is reduced, and the operation efficiency of the service process is improved.
On the basis of the foregoing embodiment, the sorting module 310 is specifically configured to:
and acquiring a configuration file, and sequencing each configuration item contained in the configuration file based on the reading times if the data volume of the configuration file is greater than the maximum memory volume of the process memory in the process of operating the service process based on the configuration file.
On the basis of the foregoing embodiment, the sorting module 310 is further configured to:
and acquiring a configuration file, and sequencing each configuration item contained in the configuration file based on the reading times at preset time intervals in the process of operating the service process based on the configuration file.
On the basis of the above embodiment, in the process of running the service process based on the configuration file, the sorting module 310 includes:
and running a preset function, and updating the reading times of the configuration items based on the preset function when the configuration items are read.
Preferably, updating the read times of the configuration items based on the preset function includes:
updating the reading times of the configuration items cached to the process memory based on the preset function;
updating the reading times of the configuration items cached to the local file based on the preset function.
On the basis of the above embodiment, the apparatus further includes:
the determining module is used for determining the similarity between the configuration items, and if the similarity of any plurality of the configuration items is larger than a preset threshold value, determining the configuration items as similar configuration items;
and the execution module is used for keeping any configuration item and deleting other configuration items at the same time.
On the basis of the above embodiment, the apparatus further includes:
the storage module is used for storing the configuration items based on a local disk before the service process is restarted if the service process is a single-node service; and if the service process is cluster service, storing the configuration item based on a preset database before the service process is restarted.
The configuration file management device provided by the embodiment of the invention can execute the configuration file management method provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
Example four
Fig. 4 is a schematic structural diagram of a server according to a fourth embodiment of the present invention. Fig. 4 shows a block diagram of an exemplary server 4 suitable for use in implementing embodiments of the present invention. The server 4 shown in fig. 4 is only an example, and should not bring any limitation to the function and the scope of use of the embodiment of the present invention.
As shown in fig. 4, the server 4 is in the form of a general purpose computing electronic device. The components of the server 4 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.
Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
The server 4 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by server 4 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM)30 and/or cache memory 32. The server 4 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 4, and commonly referred to as a "hard drive"). Although not shown in FIG. 4, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. System memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in system memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally carry out the functions and/or methodologies of the described embodiments of the invention.
The server 4 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), with one or more devices that enable a user to interact with the server 4, and/or with any devices (e.g., network card, modem, etc.) that enable the server 4 to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface 22. Also, the server 4 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network, such as the Internet) via the network adapter 20. As shown in FIG. 4, the network adapter 20 communicates with the other modules of the server 4 via the bus 18. It should be appreciated that although not shown in FIG. 4, other hardware and/or software modules may be used in conjunction with the server 4, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 16 executes various functional applications and page displays by running programs stored in the system memory 28, for example, implementing the configuration file management method provided by the embodiment, wherein the method includes:
acquiring a configuration file, and sequencing all configuration items contained in the configuration file based on the reading times in the process of running a service process based on the configuration file;
sequentially caching the configuration items to a process memory according to the sequence of the reading times from high to low;
after the data volume of the cached configuration items is larger than or equal to the storage threshold value of the process memory, caching the rest configuration items to a local file, and continuing to run the service process based on the configuration file.
Of course, those skilled in the art can understand that the processor may also implement the technical solution of the configuration file management method provided in any embodiment of the present invention.
EXAMPLE five
An embodiment five of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements a method for managing a configuration file, for example, the method provided by the embodiment, where the method includes:
acquiring a configuration file, and sequencing all configuration items contained in the configuration file based on the reading times in the process of running a service process based on the configuration file;
sequentially caching the configuration items to a process memory according to the sequence of the reading times from high to low;
after the data volume of the cached configuration items is larger than or equal to the storage threshold value of the process memory, caching the rest configuration items to a local file, and continuing to run the service process based on the configuration file.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer-readable storage medium may be, for example but not limited to: an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, or the like, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It will be understood by those skilled in the art that the modules or steps of the invention described above may be implemented by a general purpose computing device, they may be centralized on a single computing device or distributed across a network of computing devices, and optionally they may be implemented by program code executable by a computing device, such that it may be stored in a memory device and executed by a computing device, or it may be separately fabricated into various integrated circuit modules, or it may be fabricated by fabricating a plurality of modules or steps thereof into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments illustrated herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. A configuration file management method, the method comprising:
acquiring a configuration file, and, while running a service process based on the configuration file, sorting the configuration items contained in the configuration file by their read counts;
caching the configuration items to a process memory in descending order of read count;
after the data volume of the cached configuration items is greater than or equal to a storage threshold of the process memory, caching the remaining configuration items to a local file, and continuing to run the service process based on the configuration file.
2. The method according to claim 1, wherein sorting the configuration items contained in the configuration file by their read counts while running the service process based on the configuration file comprises:
while running the service process based on the configuration file, if the data volume of the configuration file is greater than the maximum capacity of the process memory, sorting the configuration items contained in the configuration file by their read counts.
3. The method according to claim 1, wherein sorting the configuration items contained in the configuration file by their read counts while running the service process based on the configuration file further comprises:
sorting the configuration items contained in the configuration file by their read counts at preset time intervals while running the service process based on the configuration file.
4. The method according to claim 1, wherein running the service process based on the configuration file comprises:
running a preset function, and updating the read count of a configuration item based on the preset function when that configuration item is read.
5. The method according to claim 4, wherein updating the read count of the configuration item based on the preset function comprises:
updating, based on the preset function, the read counts of the configuration items cached to the process memory;
updating, based on the preset function, the read counts of the configuration items cached to the local file.
6. The configuration file management method according to claim 1, further comprising:
determining the similarity between configuration items, and if the similarity of any plurality of configuration items is greater than a preset threshold, determining those configuration items to be similar configuration items;
retaining any one of the similar configuration items and deleting the others.
7. The configuration file management method according to claim 1, further comprising:
if the service process is a single-node service, storing the configuration items to a local disk before the service process is restarted;
if the service process is a cluster service, storing the configuration items to a preset database before the service process is restarted.
8. A configuration file management apparatus, comprising:
a sorting module configured to acquire a configuration file and, while running a service process based on the configuration file, sort the configuration items contained in the configuration file by their read counts;
a first caching module configured to cache the configuration items to a process memory in descending order of read count;
a second caching module configured to, after the data volume of the cached configuration items is greater than or equal to a storage threshold of the process memory, cache the remaining configuration items to a local file and continue to run the service process based on the configuration file.
9. A server, characterized in that the server comprises:
one or more processors;
storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to carry out the configuration file management method according to any one of claims 1 to 7.
10. A storage medium containing computer-executable instructions which, when executed by a computer processor, perform the configuration file management method according to any one of claims 1 to 7.
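
For illustration only, the following Python sketch shows one possible reading of the caching strategy recited in claims 1 to 5. The class and attribute names (ConfigCache, mem_threshold, spill_path) and the use of a JSON configuration file are assumptions made for the example, not elements of the claims; a real service would substitute its own configuration format and eviction trigger.

import json


class ConfigCache:
    """Minimal sketch, assuming a JSON configuration file: hot configuration
    items stay in process memory, the rest spill to a local file (claims 1-5)."""

    def __init__(self, config_path, mem_threshold_bytes, spill_path="config_spill.json"):
        with open(config_path, encoding="utf-8") as f:
            self.items = json.load(f)                      # {key: value} configuration items
        self.read_counts = {key: 0 for key in self.items}  # per-item read counters
        self.mem_threshold = mem_threshold_bytes           # storage threshold of the process memory
        self.spill_path = spill_path                       # local file for the remaining items
        self.memory_cache = dict(self.items)               # everything starts in memory

    def read(self, key):
        """'Preset function' of claims 4-5: every read updates the counter,
        whether the item currently lives in memory or in the local file."""
        self.read_counts[key] += 1
        if key in self.memory_cache:
            return self.memory_cache[key]
        with open(self.spill_path, encoding="utf-8") as f:
            return json.load(f)[key]

    def rebalance(self):
        """Sort items by read count (descending), refill the in-memory cache
        until the storage threshold is reached, and spill the rest to the file."""
        ordered = sorted(self.items, key=self.read_counts.get, reverse=True)
        self.memory_cache, spilled, used = {}, {}, 0
        for key in ordered:
            size = len(json.dumps(self.items[key]).encode("utf-8"))
            if used + size <= self.mem_threshold:
                self.memory_cache[key] = self.items[key]
                used += size
            else:
                spilled[key] = self.items[key]
        with open(self.spill_path, "w", encoding="utf-8") as f:
            json.dump(spilled, f)

Under this reading, rebalance() would be invoked either when the configuration data outgrows the process memory (claim 2) or periodically at a preset interval (claim 3).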
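Claim 6 leaves the similarity measure unspecified; as a non-authoritative sketch, the helper below uses Python's difflib.SequenceMatcher ratio as an assumed stand-in for that measure and keeps one item from each group of similar configuration items.

from difflib import SequenceMatcher


def deduplicate_similar_items(items, threshold=0.9):
    """Sketch of claim 6: configuration items whose pairwise similarity exceeds
    a preset threshold are treated as similar; one is kept, the others deleted.
    `items` maps configuration keys to values; the similarity measure is assumed."""
    keys = list(items)
    removed = set()
    for i, kept in enumerate(keys):
        if kept in removed:
            continue
        for other in keys[i + 1:]:
            if other in removed:
                continue
            similarity = SequenceMatcher(None, str(items[kept]), str(items[other])).ratio()
            if similarity > threshold:
                removed.add(other)   # keep `kept`, delete the similar item `other`
    return {key: value for key, value in items.items() if key not in removed}
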
CN202111475890.8A 2021-12-06 2021-12-06 Configuration file management method, device, server and storage medium Pending CN114064154A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111475890.8A CN114064154A (en) 2021-12-06 2021-12-06 Configuration file management method, device, server and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111475890.8A CN114064154A (en) 2021-12-06 2021-12-06 Configuration file management method, device, server and storage medium

Publications (1)

Publication Number Publication Date
CN114064154A true CN114064154A (en) 2022-02-18

Family

ID=80228693

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111475890.8A Pending CN114064154A (en) 2021-12-06 2021-12-06 Configuration file management method, device, server and storage medium

Country Status (1)

Country Link
CN (1) CN114064154A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109582386A (en) * 2018-11-09 2019-04-05 聚好看科技股份有限公司 Service starting processing method, device, electronic equipment and readable storage medium storing program for executing
US20200364143A1 (en) * 2019-05-15 2020-11-19 Adp, Llc Externalized configurations and caching solution
CN112558868A (en) * 2020-12-07 2021-03-26 炬芯科技股份有限公司 Method, device and equipment for storing configuration data
CN113094117A (en) * 2021-04-23 2021-07-09 上海传英信息技术有限公司 Processing method, terminal and storage medium


Similar Documents

Publication Publication Date Title
US9934005B2 (en) Dynamically building locale objects or subsections of locale objects based on historical data
CN109559234B (en) Block chain state data storage method, equipment and storage medium
CN110908697B (en) Resource packaging method, device, server and storage medium
US20140046920A1 (en) System and Method for Analyzing Available Space in Data Blocks
CN109471851B (en) Data processing method, device, server and storage medium
CN110908707B (en) Resource packaging method, device, server and storage medium
CN107451062B (en) User interface traversal test method, device, server and storage medium
CN111949710B (en) Data storage method, device, server and storage medium
CN113111131B (en) Method and system for achieving Neo4j data synchronization based on Flink, and integration method and device
WO2023040399A1 (en) Service persistence method and apparatus
CN109284108B (en) Unmanned vehicle data storage method and device, electronic equipment and storage medium
GB2530052A (en) Outputting map-reduce jobs to an archive file
CN115408391A (en) Database table changing method, device, equipment and storage medium
CN109033456B (en) Condition query method and device, electronic equipment and storage medium
US10585618B2 (en) Providing access to virtual sequential access volume
CN110489425B (en) Data access method, device, equipment and storage medium
CN114064154A (en) Configuration file management method, device, server and storage medium
CN114547086A (en) Data processing method, device, equipment and computer readable storage medium
US10055304B2 (en) In-memory continuous data protection
CN110659312B (en) Data processing method, device, equipment and computer storage medium
CN114036085A (en) Multitask read-write scheduling method based on DDR4, computer equipment and storage medium
CN111782834A (en) Image retrieval method, device, equipment and computer readable storage medium
US11748352B2 (en) Dynamical database system resource balance
CN113127238B (en) Method and device for exporting data in database, medium and equipment
CN117312817A (en) Service ordering method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination