CN109683984B - Data hot loading method and device, electronic equipment and computer readable storage medium - Google Patents


Info

Publication number
CN109683984B
CN109683984B (application number CN201811536447.5A)
Authority
CN
China
Prior art keywords
cache region
reference count
foreground
data
preset operation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811536447.5A
Other languages
Chinese (zh)
Other versions
CN109683984A (en)
Inventor
刘兴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lazas Network Technology Shanghai Co Ltd
Original Assignee
Lazas Network Technology Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lazas Network Technology Shanghai Co Ltd filed Critical Lazas Network Technology Shanghai Co Ltd
Priority to CN201811536447.5A priority Critical patent/CN109683984B/en
Publication of CN109683984A publication Critical patent/CN109683984A/en
Application granted granted Critical
Publication of CN109683984B publication Critical patent/CN109683984B/en

Classifications

    • G06F9/44521 Dynamic linking or loading; link editing at or after load time, e.g. Java class loading
    • G06F9/5016 Allocation of resources to service a request, the resource being the memory
    • G06F9/5022 Mechanisms to release resources
    • G06F9/526 Mutual exclusion algorithms

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The embodiments of the disclosure provide a data hot loading method and apparatus, an electronic device, and a computer-readable storage medium. The data hot loading method includes the following steps: in response to a data hot loading event, placing data to be loaded into a background cache region, and performing a first preset operation on a first reference count corresponding to the background cache region, wherein the first reference count is set to an initial value before the first preset operation is performed; performing a second preset operation on a second reference count corresponding to the foreground cache region, wherein the first preset operation and the second preset operation are complementary operations; and replacing the foreground cache region with the background cache region, and replacing the second reference count with the first reference count. In this way, neither the process reading data nor the process writing data blocks and waits on the other during the replacement of the foreground and background cache regions; either process can start without waiting for the other to finish, which improves execution efficiency.

Description

Data hot loading method and device, electronic equipment and computer readable storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a data hot loading method and apparatus, an electronic device, and a computer-readable storage medium.
Background
Data loading of a process can be divided into cold loading and hot loading. With cold loading, the process must be restarted before the data loading can complete; with hot loading, the data loading can be completed without restarting the process. For data hot loading, the problem is how to replace old data with new data while ensuring that the replacement process is both safe and efficient.
A safe replacement process means that the data cannot be corrupted. For example, while old data is being replaced with new data, other processes may be reading the old data at the same time, which may cause the data read to be incomplete (one part is old data and the other part is the new replacement data) or even corrupted (the data is damaged and cannot be used). An efficient replacement process means that the replacement completes as quickly as possible, avoiding blocking waits during replacement and ensuring that processes can use the new data as soon as possible.
Disclosure of Invention
The embodiment of the disclosure provides a data hot loading method and device, electronic equipment and a computer readable storage medium.
In a first aspect, a data hot loading method is provided in an embodiment of the present disclosure.
Specifically, the data hot loading method includes:
responding to a data hot loading event, putting data to be loaded into a background cache region, and executing a first preset operation on a first reference count corresponding to the background cache region; wherein the first reference count is set to an initial value prior to performing the first preset operation;
performing a second preset operation on a second reference count corresponding to the foreground cache region; wherein the first preset operation and the second preset operation are complementary operations;
replacing the foreground cache region with the background cache region, and replacing the second reference count with the first reference count.
With reference to the first aspect, in a first implementation manner of the first aspect, the data hot loading method further includes:
releasing the foreground cache in response to an event that the second reference count is updated to an initial value.
With reference to the first aspect and/or the first implementation manner of the first aspect, in a second implementation manner of the first aspect, the first preset operation is an operation of adding a preset threshold, and the second preset operation is an operation of subtracting the preset threshold.
With reference to the first aspect and/or the first implementation manner of the first aspect, in a third implementation manner of the first aspect, the data hot loading method further includes:
performing the first preset operation on the second reference count in response to a read start operation;
and returning the address of the foreground cache region.
With reference to the first aspect and/or the first implementation manner of the first aspect, in a fourth implementation manner of the first aspect, the data hot loading method further includes:
in response to a read end operation, performing the second preset operation on the second reference count.
With reference to the first aspect and/or the first implementation manner of the first aspect, in a fifth implementation manner of the first aspect, the replacing the foreground cache region with the background cache region includes:
and replacing the foreground cache region with the background cache region by utilizing atomic operation.
In a second aspect, a data hot loading device is provided in the embodiments of the present disclosure.
Specifically, the data hot loading device includes:
the first response module is configured to respond to a data hot loading event, place data to be loaded into a background cache region, and execute a first preset operation on a first reference count corresponding to the background cache region; wherein the first reference count is set to an initial value prior to performing the first preset operation;
the first operation module is configured to perform a second preset operation on a second reference count corresponding to the foreground cache region; wherein the first preset operation and the second preset operation are complementary operations;
a replacement module configured to replace the foreground cache area with the background cache area and replace the second reference count with the first reference count.
With reference to the second aspect, in a first implementation manner of the second aspect, the data hot loading apparatus further includes:
a second response module configured to release the foreground cache in response to an event that the second reference count is updated to an initial value.
With reference to the second aspect and/or the first implementation manner of the second aspect, in a second implementation manner of the second aspect, the first preset operation is an operation of adding a preset threshold, and the second preset operation is an operation of subtracting the preset threshold.
With reference to the second aspect and/or the first implementation manner of the second aspect, in a third implementation manner of the second aspect, the data hot loading apparatus further includes:
a third response module configured to perform the first preset operation on the second reference count in response to a read start operation;
a return module configured to return an address of the foreground cache.
With reference to the second aspect and/or the first implementation manner of the second aspect, in a fourth implementation manner of the second aspect of the present disclosure, the data hot loading apparatus further includes:
a fourth response module configured to perform the second preset operation on the second reference count in response to a read end operation.
In combination with the second aspect and/or the first implementation manner of the second aspect, in a fifth implementation manner of the second aspect, the replacement module includes:
a replacement submodule configured to replace the foreground cache region with the background cache region using an atomic operation.
These functions can be implemented in hardware, or in hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the functions described above.
In one possible design, the data hot-loading apparatus includes a memory and a processor, the memory is used for storing one or more computer instructions that support the data hot-loading apparatus to execute the data hot-loading method in the first aspect, and the processor is configured to execute the computer instructions stored in the memory. The data hot-loading device can also comprise a communication interface for communicating with other equipment or a communication network.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including a memory and a processor; wherein the memory is configured to store one or more computer instructions, wherein the one or more computer instructions are executed by the processor to implement the method steps of the first aspect.
In a fourth aspect, the disclosed embodiments provide a computer-readable storage medium storing computer instructions for a data hot-loading apparatus, including the computer instructions for performing the data hot-loading method of the first aspect.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
in the data hot loading process, the new data to be loaded is first stored in the background cache region; a first reference count is set for the background cache region and a first preset operation is performed on its initial value; a second preset operation, complementary to the first preset operation, is then performed on the second reference count corresponding to the foreground cache region; finally the foreground cache region is replaced with the background cache region and the second reference count is replaced with the first reference count, completing the data hot loading process. Through the embodiments of the disclosure, new data can be stored in the background cache region using two cache regions (a foreground cache region and a background cache region), with the first reference count and the second reference count recording how many times the background cache region and the foreground cache region are referenced. Neither the process reading data nor the process writing data blocks and waits on the other during the replacement of the foreground and background cache regions; either process can start without waiting for the other to finish, which improves execution efficiency.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
Other features, objects, and advantages of the present disclosure will become more apparent from the following detailed description of non-limiting embodiments when taken in conjunction with the accompanying drawings. In the drawings:
FIG. 1 illustrates a flow diagram of a method of data hot-loading according to an embodiment of the present disclosure;
FIG. 2 illustrates a flow diagram of a method of data hot-loading according to another embodiment of the present disclosure;
FIG. 3 is a block diagram of a data hot-loading device according to an embodiment of the present disclosure;
FIG. 4 is a block diagram of a data hot-loading device according to another embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an electronic device suitable for implementing a data hot loading method according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, exemplary embodiments of the present disclosure will be described in detail with reference to the accompanying drawings so that those skilled in the art can easily implement them. Also, for the sake of clarity, parts not relevant to the description of the exemplary embodiments are omitted in the drawings.
In the present disclosure, it is to be understood that terms such as "including" or "having," etc., are intended to indicate the presence of the disclosed features, numbers, steps, behaviors, components, parts, or combinations thereof, and are not intended to preclude the possibility that one or more other features, numbers, steps, behaviors, components, parts, or combinations thereof may be present or added.
It should be further noted that the embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
In the related art, data hot loading can secure the replacement process by locking, including using a mutual exclusion lock (mutex), a read-write lock, or double buffering based on a thread-local lock. Suppose n existing threads need to read the data at the same time while 1 thread performs a data hot-loading operation that replaces the existing data with new data. In this scenario there are n+1 operations: n read operations and 1 replacement operation.
When a mutex is used to secure the replacement process, the n+1 operations are fully mutually exclusive: while 1 operation is in progress, the remaining n operations must wait until it completes before the next operation can proceed.
When a read-write lock is used to secure the replacement process, the n read operations can proceed simultaneously, but the replacement operation and the read operations are mutually exclusive and cannot proceed at the same time.
When double buffering based on a thread-local lock is used to secure the replacement process, the same problem as with the read-write lock remains: the n read operations can proceed simultaneously, but the replacement operation and the read operations are mutually exclusive and cannot proceed at the same time.
These lock-based schemes can guarantee the safety of the replacement process, but cannot avoid blocking during it: while a read operation is in progress, the replacement operation must wait; conversely, while the replacement operation is in progress, the read operations must wait, wasting computing resources.
FIG. 1 shows a flow diagram of a data hot-loading method according to an embodiment of the present disclosure. As shown in fig. 1, the data hot loading method includes the following steps S101 to S103:
in step S101, in response to a data hot loading event, placing data to be loaded into a background cache region, and performing a first preset operation on a first reference count corresponding to the background cache region; wherein the first reference count is set to an initial value prior to performing the first preset operation;
in step S102, a second preset operation is performed on a second reference count corresponding to the foreground cache region; wherein the first preset operation and the second preset operation are complementary operations;
in step S103, the foreground cache region is replaced with the background cache region, and the second reference count is replaced with the first reference count.
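Steps S101 to S103 can be sketched with a small double-buffer structure. The following C++ sketch is hypothetical (the `Buffer` and `DoubleBuffer` names are illustrative, not from the patent), and it simplifies the ordering by performing the atomic swap of S103 before the decrement of S102, so the old region is never reachable with a zero count:

```cpp
#include <atomic>
#include <string>

// One cache region plus its reference count (a hypothetical layout).
struct Buffer {
    std::string data;           // cached copy of the source data
    std::atomic<int> refs{0};   // reference count; 0 is the initial value
};

// Double-buffer structure holding the current foreground cache region.
struct DoubleBuffer {
    std::atomic<Buffer*> fg{nullptr};

    // Hot-load new data without blocking readers.
    // Returns the old foreground region; it is released once its
    // reference count returns to the initial value.
    Buffer* hot_load(const std::string& new_data) {
        Buffer* bg = new Buffer;         // S101: open background cache region
        bg->data = new_data;             //       place the data to be loaded
        bg->refs.fetch_add(1);           // S101: first preset operation (+1)
        Buffer* old = fg.exchange(bg);   // S103: atomic replacement
        old->refs.fetch_sub(1);          // S102: second preset operation (-1)
        return old;
    }
};
```

Neither `fetch_add`, `fetch_sub`, nor `exchange` takes a lock, so readers holding the old region's address continue undisturbed while the swap happens.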
In this embodiment, the current cache data is stored in the foreground cache region, and one or more processes may read the cache data stored in the foreground cache region and perform their respective processing. After new cache data, i.e., the data to be loaded, is generated, the double-buffer structure process can create a new background cache region, place the new cache data into it, and set a first reference count for the background cache region. The first reference count indicates how many processes reference the background cache region, including the reference held by the double-buffer structure process itself. The initial value of the first reference count may be set to 0 or another value. Since opening up the background cache region is an action of the double-buffer structure process, the background cache region can be considered to be referenced by that process, so a first preset operation is performed on the first reference count to record that the background cache region is referenced by the double-buffer structure process.
Because the old cache data to be replaced is stored in the foreground cache region, the second preset operation can be performed on the second reference count to record that the double-buffer structure process has finished referencing the foreground cache region. (When the foreground cache region was initially established, it was, like the background cache region, also opened up by the double-buffer structure process and became the foreground cache region by replacing the original one; the first preset operation was performed on its reference count at that time, so the purpose of performing the second preset operation now is to indicate that the double-buffer structure process's reference to the foreground cache region has ended.) The purpose of the first preset operation is that, after it is performed, the second reference count records that the foreground cache region is referenced by a new process; the purpose of the second preset operation is that, after it is performed, the second reference count records that a process that previously referenced the foreground cache region has finished referencing it. Referencing a cache region here can be understood as a process's read or write operation on that cache region. The first preset operation and the second preset operation are therefore a pair of complementary operations. In one embodiment, the first preset operation may simply add a preset threshold, such as 1, and the second preset operation may simply subtract the preset threshold, such as 1.
After the background cache region and the corresponding first reference count and second reference count are set, the foreground cache region can be replaced with the background cache region to serve subsequent processes; that is, when a new process needs to read data in the cache region, the pointer of the background cache region (now the new foreground cache region) can be returned to it, so that the new process reads the data in the background cache region. Note that replacing the foreground cache region with the background cache region can be understood as an address replacement at the application level. For example, a copy of source data A is cached in the foreground cache region, and the application corresponding to source data A stores the address of the foreground cache region; if a process reads source data A, the application returns the address pointer of the foreground cache region to the reading process, which reads the copy of source data A from there. If source data A is updated, the updated copy may be stored in the background cache region, and replacing the foreground cache region with the background cache region actually replaces the address pointer of the foreground cache region stored by the application with the address pointer of the background cache region, so that when a new reading process reads the copy of source data A, the application returns the address pointer of the background cache region to it.
Because the foreground cache region is replaced with the background cache region (which, in implementation, can be understood as replacing the address pointer of the foreground cache region with that of the background cache region), the corresponding reference count is replaced as well: the second reference count is replaced with the first reference count, that is, the new foreground cache region is the original background cache region, and its reference count is the original first reference count. The data hot loading process is thus realized: a new process can read the newly loaded data in the new foreground cache region, while processes started earlier can continue to read the data in the original foreground cache region, so read operations and the replacement operation never wait on each other. Note that after the foreground cache region is replaced with the background cache region, if the second reference count (i.e., the reference count corresponding to the original foreground cache region) has not been updated to the initial value, the original foreground cache region is not released; the reading processes that are reading it can continue to do so until they complete their read operations, at which point the second reference count is updated to the initial value and the original foreground cache region is released.
In the data hot loading process, the new data to be loaded is first stored in the background cache region; a first reference count is set for the background cache region and a first preset operation is performed on its initial value; a second preset operation, complementary to the first preset operation, is then performed on the second reference count corresponding to the foreground cache region; finally the foreground cache region is replaced with the background cache region and the second reference count is replaced with the first reference count, completing the data hot loading process. Through the embodiments of the disclosure, new data can be stored in the background cache region using two cache regions (a foreground cache region and a background cache region), with the first reference count and the second reference count recording how many times the background cache region and the foreground cache region are referenced. Neither the process reading data nor the process writing data blocks and waits on the other during the replacement of the foreground and background cache regions; either process can start without waiting for the other to finish, which improves execution efficiency.
In an optional implementation manner of this embodiment, the method further includes the following steps:
releasing the foreground cache in response to an event that the second reference count is updated to an initial value.
In this optional implementation manner, when the second reference count corresponding to the foreground cache region is updated to the initial value, the foreground cache region is referenced neither by any reading process nor by the data loading process (i.e., new cache data already exists and the cache data in the foreground cache region is old data), and the foreground cache region can then be released. The event in which the second reference count is updated to the initial value may occur after the background cache region is opened up or before the foreground cache region is replaced with the background cache region. For example, after new cache data is generated, the double-buffer structure process stores it in a newly opened background cache region. At this time, if the foreground cache region is not referenced by any data reading process (i.e., no data reading process is reading data in the foreground cache region), then when step S102 is executed the second reference count is updated to the initial value; if the foreground cache region is referenced by one or more data reading processes (i.e., at least one data reading process is reading data in it), then when step S102 is executed the second reference count is not yet updated to the initial value. In this way, the foreground cache region is released only when it is no longer referenced by any process.
In an optional implementation manner of this embodiment, as shown in fig. 2, the method further includes the following steps S201 to S202:
in step S201, in response to a read start operation, performing the first preset operation on the second reference count;
in step S202, the address of the foreground buffer is returned.
In this optional implementation manner, when a data reading process is to read the cache data in the foreground cache region, it initiates a read start operation; in response, the first preset operation, for example adding 1, is performed on the second reference count, and the address of the foreground cache region is returned to the data reading process so that it can read the cache data there.
The data reading process is exemplified below.
When multiple processes read the cache data in the foreground cache region at the same time, an atomic operation can be used to add 1 to the second reference count of the foreground cache region, and the address of the foreground cache region is returned directly to each data reading process. Multiple processes can thus read the data simultaneously without affecting one another. Initially, the second reference count of the foreground cache region is 1 (the initial value is 0, and the first preset operation was performed once when the foreground cache region was opened up), meaning the foreground cache region is referenced only by the double-buffer structure. When the first data reading process reads the cache data in the foreground cache region, an atomic operation adds 1 to the second reference count, making it 2 (indicating that the double-buffer structure and one data reading process reference the foreground cache region), and the address of the foreground cache region is returned to the data reading process for reading. If n data reading processes read simultaneously, 1 is added to the second reference count for each of them, so that after these operations the second reference count is n+1, indicating that the n data reading processes and the double-buffer structure reference the foreground cache region. After a data reading process finishes reading, 1 is subtracted from the second reference count using an atomic operation. If the second reference count becomes 0, the foreground cache region is released; if it is not 0, no action is taken.
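The reader-side protocol described above can be sketched as two small functions. This is again a hypothetical C++ sketch (the `read_begin`/`read_end` names are illustrative); note that a production lock-free design must additionally guard the window between loading the foreground pointer and incrementing the count, for example with hazard pointers or RCU:

```cpp
#include <atomic>
#include <string>

struct Buffer {
    std::string data;
    std::atomic<int> refs{0};   // 0 is the initial value
};

// Read start: first preset operation (+1), then return the region's address.
Buffer* read_begin(std::atomic<Buffer*>& fg) {
    Buffer* b = fg.load();
    b->refs.fetch_add(1);       // atomic: one more reader references b
    return b;
}

// Read end: second preset operation (-1); release the region when the count
// returns to the initial value. Returns true if this call released it.
bool read_end(Buffer* b) {
    if (b->refs.fetch_sub(1) == 1) {   // count has just become 0
        delete b;
        return true;
    }
    return false;
}
```

Because `fetch_add` and `fetch_sub` are atomic read-modify-write operations, n readers incrementing concurrently always leave the count at n+1 (including the double-buffer structure's own reference), matching the accounting in the paragraph above.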
In an optional implementation manner of this embodiment, the method further includes the following steps:
in response to a read end operation, performing the second preset operation on the second reference count.
In this optional implementation, after a data reading process finishes reading the cache data in the foreground cache region, it initiates a read end operation; at this point, a second preset operation, for example subtracting 1, may be performed on the second reference count corresponding to the foreground cache region, indicating that the data reading process has finished referencing the foreground cache region.
In an optional implementation manner of this embodiment, the step S103 of replacing the foreground buffer with the background buffer further includes the following steps:
and replacing the foreground cache region with the background cache region by utilizing atomic operation.
In this alternative implementation, an atomic operation is used when replacing the foreground cache region with the background cache region. An atomic operation is an operation that cannot be interrupted by the thread scheduling mechanism; once started, it runs to completion with no switching in between. Performing the replacement atomically ensures that the swap of the foreground and background cache regions cannot be interrupted, thereby avoiding data reading errors.
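A minimal sketch of this atomic replacement, assuming the application keeps the current foreground address in a `std::atomic` pointer (the names `Region`, `front`, and `swap_front` are illustrative):

```cpp
#include <atomic>

// Illustrative: the address handed to readers lives in one atomic pointer.
struct Region { const char* data; };

std::atomic<Region*> front{nullptr};

// Replacing the foreground region with the background region is a single
// uninterruptible exchange: it installs the background region and returns
// the old foreground region, whose reference count the caller then
// decrements via the second preset operation.
Region* swap_front(Region* back) {
    return front.exchange(back, std::memory_order_acq_rel);
}
```

Because the exchange is atomic, a reader observes either the old foreground address or the new one, never a torn or intermediate value.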
In an optional implementation manner, the first preset operation and/or the second preset operation performed on the first reference count and/or the second reference count are likewise executed using atomic operations, so as to avoid data reading errors.
The security and efficiency of the data hot loading scheme proposed by the embodiment of the present disclosure are illustrated by specific examples below:
when multiple data reading processes read data simultaneously, each adds 1 to the second reference count of the foreground cache region and then receives the address of the foreground cache region; after a data reading process finishes reading, it subtracts 1 from the second reference count, and the foreground cache region is released only when the second reference count drops to 0. Therefore, multiple data reading processes can read the data safely at the same time, and no dangling-pointer problem arises.
When the data is hot loaded, the original cache data is still stored in the foreground cache region, the updated cache data is written into the background cache region, and the two regions are independent of each other. Then 1 is subtracted from the second reference count, and if the second reference count drops to 0, the foreground cache region is released; the foreground cache region is replaced with the background cache region using an atomic operation, i.e., the pointer address of the background cache region is stored as the cache address of the updated cache data for subsequent processes to read. It follows that the hot loading process and the replacement process are safe.
When the updated cache data in the background cache region replaces the original cache data in the foreground cache region, the pointer is swapped directly with an atomic operation, so the replacement does not block, and the data reading processes and the replacement process do not need to wait for each other's operations to complete. It follows that the replacement process is efficient.
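The whole hot-load sequence described above (fill the background region, first preset operation, atomic swap, second preset operation on the old foreground region) can be sketched end to end. All names here are illustrative assumptions, and for brevity this sketch ignores the window between loading the pointer and incrementing the count on the reader side, which a production concurrent implementation would need to close:

```cpp
#include <atomic>
#include <string>

struct Region {
    std::atomic<int> refs{0};  // reference count for this region
    std::string data;          // cached copy of the source data
};

std::atomic<Region*> front{nullptr};  // address of the foreground region

// Second preset operation: subtract 1 and release the region at 0.
void unref(Region* r) {
    if (r->refs.fetch_sub(1) == 1) delete r;
}

// Initial load: open up a region and perform the first preset operation (+1).
void initial_load(const std::string& d) {
    Region* r = new Region;
    r->data = d;
    r->refs.fetch_add(1);  // 0 -> 1: referenced by the double-buffer structure
    front.store(r);
}

// Hot load: write into a fresh background region, +1 on its count,
// atomically swap it in, then -1 on the old foreground so it is
// freed once every in-flight reader has finished.
void hot_load(const std::string& d) {
    Region* back = new Region;
    back->data = d;
    back->refs.fetch_add(1);             // first preset operation
    Region* old = front.exchange(back);  // replace the foreground region
    unref(old);                          // second preset operation
}

// Reader: +1 before reading, -1 after (read start / read end).
std::string read_once() {
    Region* r = front.load();
    r->refs.fetch_add(1);
    std::string value = r->data;
    unref(r);
    return value;
}
```

An old foreground region is deleted only when both the double-buffer structure and all readers have dropped their references, so a reader holding the old address is never left with a dangling pointer.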
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the disclosed methods.
Fig. 3 shows a block diagram of a data hot loading apparatus according to an embodiment of the present disclosure, which may be implemented as part or all of an electronic device by software, hardware, or a combination of the two. As shown in fig. 3, the data hot loading apparatus includes:
the first response module 301 is configured to, in response to a data hot loading event, place data to be loaded into a background cache region, and execute a first preset operation on a first reference count corresponding to the background cache region; wherein the first reference count is set to an initial value prior to performing the first preset operation;
the first operation module 302 is configured to perform a second preset operation on a second reference count corresponding to the foreground cache region; wherein the first preset operation and the second preset operation are complementary operations;
a replacement module 303 configured to replace the foreground cache region with the background cache region and replace the second reference count with the first reference count.
In this embodiment, the current cache data is stored in the foreground cache region, and one or more processes may read the cache data stored there and perform their respective processing. After new cache data, i.e., data to be loaded, is generated, the double-buffer structure process can open up a new background cache region, place the new cache data into it, and set a first reference count for it. The first reference count indicates how many processes reference the background cache region, including the double-buffer structure process itself. The initial value of the first reference count may be set to 0 or another value. Since opening up the background cache region is an action of the double-buffer structure process, the background cache region can be considered to be referenced by that process, so a first preset operation may be performed on the first reference count to indicate that the background cache region is referenced by the double-buffer structure process.
Because the old cache data to be replaced is stored in the foreground cache region, the second preset operation can be performed on the second reference count to indicate that the double-buffer structure process has finished referencing the foreground cache region. (When the foreground cache region was initially established, it was, like the background cache region, opened up by the double-buffer structure process and became the foreground cache region by replacing the original one; the first preset operation was performed on its reference count at that time, so performing the second preset operation now indicates that the double-buffer structure process's reference to it has ended.) The purpose of the first preset operation is to make the reference count record that the cache region is referenced by a new process; the purpose of the second preset operation is to make the reference count record that a process that previously referenced the cache region has finished referencing it, where referencing a cache region can be understood as a process's read/write operations on that region. The first preset operation and the second preset operation are therefore a pair of complementary operations. In one embodiment, the first preset operation may simply add a preset threshold, such as 1, and the second preset operation may simply subtract the same preset threshold.
After the background cache region and the corresponding first reference count and second reference count are set, the foreground cache region can be replaced with the background cache region to serve subsequent processes; that is, when a new process needs to read data in the cache region, the pointer of the background cache region (now the new foreground cache region) can be returned to the new process so that it reads the data there. It should be noted that replacing the foreground cache region with the background cache region can be understood as an address replacement of the cache region at the application level. For example, suppose a copy of source data a is cached in the foreground cache region and the application corresponding to source data a stores the address of the foreground cache region; if a process reads source data a, the application can return the address pointer of the foreground cache region to the reading process, which then reads the copy of source data a from the foreground cache region. If source data a is updated, the updated copy may be stored in the background cache region, and replacing the foreground cache region with the background cache region actually replaces the address pointer of the foreground cache region stored by the application with the address pointer of the background cache region, so that when a new reading process reads the copy of source data a, the application returns the address pointer of the background cache region to that reading process.
Because the foreground cache region is replaced with the background cache region (which, in implementation, can be understood as the address pointer of the foreground cache region being replaced with that of the background cache region), the corresponding reference count is also replaced: the second reference count is replaced with the first reference count, that is, the new foreground cache region is the original background cache region, and its reference count is the original first reference count. The data hot loading process is thereby realized: new processes read the newly loaded data in the new foreground cache region, while the original processes continue to read the data in the original foreground cache region, so the reading operation and the replacing operation never wait for each other. It should be noted that, after the foreground cache region is replaced with the background cache region, if the second reference count (i.e., the count corresponding to the original foreground cache region) has not been updated to the initial value, the original foreground cache region is not released, and the reading processes still reading it can continue to do so; only when those reading processes complete their read operations is the second reference count updated to the initial value and the original foreground cache region released.
In the data hot loading process of the embodiment of the present disclosure, the first response module 301 first stores the new data to be loaded in the background cache region, sets a first reference count for the background cache region, and performs the first preset operation on the basis of the initial value of the first reference count; the first operation module 302 then performs, on the second reference count corresponding to the foreground cache region, a second preset operation complementary to the first preset operation; and the replacement module 303 replaces the foreground cache region with the background cache region and replaces the second reference count with the first reference count, thereby completing the data hot loading process. Through the embodiment of the disclosure, new data can be stored in the background cache region using a double-cache structure (foreground cache region and background cache region), the first reference count and the second reference count identify how many times the background cache region and the foreground cache region are referenced, blocking waits between the data reading and data writing processes during the replacement of the foreground and background cache regions are avoided, and either process can start without waiting for the other to complete, so that execution efficiency is improved.
In an optional implementation manner of this embodiment, the data hot loading apparatus further includes:
a second response module configured to release the foreground cache in response to an event that the second reference count is updated to an initial value.
In this optional implementation manner, when the second reference count corresponding to the foreground cache region is updated to the initial value, the foreground cache region is referenced neither by any reading process nor by the data loading process (i.e., new cache data already exists, and the cache data in the foreground cache region is old), so the foreground cache region can be released. The event of the second reference count being updated to the initial value may occur after the background cache region is opened up, or before the foreground cache region is replaced with the background cache region. For example, after new cache data is generated, the double-buffer structure process stores it in a newly opened background cache region; at this time, if the foreground cache region is not referenced by any data reading process (i.e., no data reading process is reading its data), the first operation module 302 updates the second reference count to the initial value, whereas if the foreground cache region is referenced by one or more data reading processes (i.e., at least one data reading process is reading its data), the second reference count is not yet updated to the initial value. In this way, the foreground cache region is released only when it is no longer referenced by any process.
In an optional implementation manner of this embodiment, as shown in fig. 4, the data hot loading apparatus further includes:
a third response module 401 configured to perform the first preset operation on the second reference count in response to a read start operation;
a return module 402 configured to return an address of the foreground cache.
In this optional implementation manner, when a data reading process reads the cache data in the foreground cache region, it initiates a read start operation; in response, a first preset operation, for example adding 1, is performed on the second reference count, and the address of the foreground cache region is returned to the data reading process so that it can read the cache data in the foreground cache region.
The data reading process is exemplified below.
When multiple processes read the cache data in the foreground cache region at the same time, an atomic operation can be used to add 1 to the second reference count of the foreground cache region, and the address of the foreground cache region is returned directly to each data reading process; the processes can then read the data simultaneously without affecting one another. Initially, the second reference count of the foreground cache region is 1 (the initial value is 0, and the first preset operation is executed once when the foreground cache region is opened up), meaning that the foreground cache region is referenced only by the double-buffer structure. When the first data reading process reads the cache data in the foreground cache region, an atomic operation adds 1 to the second reference count, making it 2 (indicating that the double-buffer structure and one data reading process reference the foreground cache region), and the address of the foreground cache region is returned to the data reading process for reading. If n data reading processes read simultaneously, each adds 1 to the second reference count, so that after these operations the count is n+1, indicating that the n data reading processes and the double-buffer structure all reference the foreground cache region. After a data reading process finishes reading, it subtracts 1 from the second reference count using an atomic operation. If the second reference count becomes 0, the foreground cache region is released; if it is not 0, no action is taken.
In an optional implementation manner of this embodiment, the data hot loading apparatus further includes:
a fourth response module configured to perform the second preset operation on the second reference count in response to a read end operation.
In this optional implementation, after a data reading process finishes reading the cache data in the foreground cache region, it initiates a read end operation; at this point, a second preset operation, for example subtracting 1, may be performed on the second reference count corresponding to the foreground cache region, indicating that the data reading process has finished referencing the foreground cache region.
In an optional implementation manner of this embodiment, the replacing module 303 includes:
a replacement submodule configured to replace the foreground cache region with the background cache region using an atomic operation.
In this optional implementation, the replacement submodule uses an atomic operation when replacing the foreground cache region with the background cache region. An atomic operation is an operation that cannot be interrupted by the thread scheduling mechanism; once started, it runs to completion with no switching in between. Performing the replacement atomically ensures that the swap of the foreground and background cache regions cannot be interrupted, thereby avoiding data reading errors.
In an optional implementation manner, the first preset operation and/or the second preset operation performed on the first reference count and/or the second reference count are likewise executed using atomic operations, so as to avoid data reading errors.
The security and efficiency of the data hot loading scheme proposed by the embodiment of the present disclosure are illustrated by specific examples below:
when multiple data reading processes read data simultaneously, each adds 1 to the second reference count of the foreground cache region and then receives the address of the foreground cache region; after a data reading process finishes reading, it subtracts 1 from the second reference count, and the foreground cache region is released only when the second reference count drops to 0. Therefore, multiple data reading processes can read the data safely at the same time, and no dangling-pointer problem arises.
When the data is hot loaded, the original cache data is still stored in the foreground cache region, the updated cache data is written into the background cache region, and the two regions are independent of each other. Then 1 is subtracted from the second reference count, and if the second reference count drops to 0, the foreground cache region is released; the foreground cache region is replaced with the background cache region using an atomic operation, i.e., the pointer address of the background cache region is stored as the cache address of the updated cache data for subsequent processes to read. It follows that the hot loading process and the replacement process are safe.
When the updated cache data in the background cache region replaces the original cache data in the foreground cache region, the pointer is swapped directly with an atomic operation, so the replacement does not block, and the data reading processes and the replacement process do not need to wait for each other's operations to complete. It follows that the replacement process is efficient.
Fig. 5 is a schematic structural diagram of an electronic device suitable for implementing a data hot loading method according to an embodiment of the present disclosure.
As shown in fig. 5, the electronic apparatus 500 includes a Central Processing Unit (CPU) 501 that can execute various processes in the embodiment shown in fig. 1 described above according to a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage section 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data necessary for the operation of the electronic apparatus 500 are also stored. The CPU 501, ROM 502, and RAM 503 are connected to each other via a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
The following components are connected to the I/O interface 505: an input portion 506 including a keyboard, a mouse, and the like; an output portion 507 including a display such as a Cathode Ray Tube (CRT) or Liquid Crystal Display (LCD), and a speaker; a storage portion 508 including a hard disk and the like; and a communication section 509 including a network interface card such as a LAN card, a modem, or the like. The communication section 509 performs communication processing via a network such as the Internet. A drive 510 is also connected to the I/O interface 505 as necessary. A removable medium 511, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 510 as necessary, so that a computer program read out therefrom is installed into the storage section 508 as needed.
In particular, according to embodiments of the present disclosure, the method described above with reference to fig. 1 may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program tangibly embodied on a computer-readable medium, the computer program comprising program code for performing the method illustrated in fig. 1. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 509, and/or installed from the removable medium 511.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units or modules described in the embodiments of the present disclosure may be implemented by software or hardware. The units or modules described may also be provided in a processor, and the names of the units or modules do not in some cases constitute a limitation of the units or modules themselves.
As another aspect, the present disclosure also provides a computer-readable storage medium, which may be the computer-readable storage medium included in the apparatus in the above-described embodiment; or it may be a separate computer readable storage medium not incorporated into the device. The computer readable storage medium stores one or more programs for use by one or more processors in performing the methods described in the present disclosure.
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the present disclosure is not limited to the specific combination of the above-mentioned features, but also encompasses other embodiments in which any combination of the above-mentioned features or their equivalents is possible without departing from the inventive concept. For example, the above features and (but not limited to) the features disclosed in this disclosure having similar functions are replaced with each other to form the technical solution.

Claims (14)

1. A data hot loading method is characterized by comprising the following steps:
responding to a data hot loading event, putting data to be loaded into a background cache region, and executing a first preset operation on a first reference count corresponding to the background cache region; wherein the first reference count is set to an initial value prior to performing the first preset operation;
performing a second preset operation on a second reference count corresponding to the foreground cache region; wherein the first preset operation and the second preset operation are complementary operations;
replacing the foreground cache region with the background cache region, and replacing the second reference count with the first reference count.
2. The method of claim 1, further comprising:
releasing the foreground cache in response to an event that the second reference count is updated to an initial value.
3. The method according to claim 1 or 2, wherein the first preset operation is an operation of adding a preset threshold value, and the second preset operation is an operation of subtracting the preset threshold value.
4. The method of claim 1 or 2, further comprising:
performing the first preset operation on the second reference count in response to a read start operation;
and returning the address of the foreground cache region.
5. The method of claim 1 or 2, further comprising:
in response to a read end operation, performing the second preset operation on the second reference count.
6. The method of claim 1 or 2, wherein replacing the foreground buffer with the background buffer comprises:
and replacing the foreground cache region with the background cache region by utilizing atomic operation.
7. A data hot-loading apparatus, comprising:
the first response module is configured to respond to a data hot loading event, place data to be loaded into a background cache region, and execute a first preset operation on a first reference count corresponding to the background cache region; wherein the first reference count is set to an initial value prior to performing the first preset operation;
the first operation module is configured to perform a second preset operation on a second reference count corresponding to the foreground cache region; wherein the first preset operation and the second preset operation are complementary operations;
a replacement module configured to replace the foreground cache area with the background cache area and replace the second reference count with the first reference count.
8. The apparatus of claim 7, further comprising:
a second response module configured to release the foreground cache in response to an event that the second reference count is updated to an initial value.
9. The apparatus according to claim 7 or 8, wherein the first preset operation is an operation of adding a preset threshold, and the second preset operation is an operation of subtracting the preset threshold.
10. The apparatus of claim 7 or 8, further comprising:
a third response module configured to perform the first preset operation on the second reference count in response to a read start operation;
a return module configured to return an address of the foreground cache.
11. The apparatus of claim 7 or 8, further comprising:
a fourth response module configured to perform the second preset operation on the second reference count in response to a read end operation.
12. The apparatus of claim 7 or 8, wherein the replacement module comprises:
a replacement submodule configured to replace the foreground cache region with the background cache region using an atomic operation.
13. An electronic device comprising a memory and a processor; wherein,
the memory is configured to store one or more computer instructions, wherein the one or more computer instructions are executed by the processor to implement the method steps of any of claims 1-6.
14. A computer-readable storage medium having stored thereon computer instructions, characterized in that the computer instructions, when executed by a processor, carry out the method steps of any of claims 1-6.
CN201811536447.5A 2018-12-14 2018-12-14 Data hot loading method and device, electronic equipment and computer readable storage medium Active CN109683984B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811536447.5A CN109683984B (en) 2018-12-14 2018-12-14 Data hot loading method and device, electronic equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN109683984A CN109683984A (en) 2019-04-26
CN109683984B true CN109683984B (en) 2022-01-28

Family

ID=66187711

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811536447.5A Active CN109683984B (en) 2018-12-14 2018-12-14 Data hot loading method and device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN109683984B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110851288B (en) * 2019-10-17 2021-08-03 腾讯科技(深圳)有限公司 Message processing method and device
CN111209504B (en) * 2020-01-06 2023-09-22 北京百度网讯科技有限公司 Method and apparatus for accessing map data
CN111723250B (en) * 2020-05-22 2024-03-08 长沙新弘软件有限公司 Chain table management method based on reference counting

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102567006A (en) * 2010-12-31 2012-07-11 ***通信集团黑龙江有限公司 Application service expanding method, device and system
CN107154968A (en) * 2017-04-26 2017-09-12 深圳市优网科技有限公司 A kind of data processing method and equipment
CN108304201A (en) * 2017-09-14 2018-07-20 腾讯科技(深圳)有限公司 Object updating method, device and equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7971031B2 (en) * 2007-05-29 2011-06-28 Hewlett-Packard Development Company, L.P. Data processing system and method

Also Published As

Publication number Publication date
CN109683984A (en) 2019-04-26

Similar Documents

Publication Publication Date Title
US10621029B2 (en) Restoring an application from a system dump file
CN109683984B (en) Data hot loading method and device, electronic equipment and computer readable storage medium
US8544022B2 (en) Transactional memory preemption mechanism
US9652248B2 (en) Load queue entry reuse for operand store compare history table update
US9164911B2 (en) Atomic execution over accesses to multiple memory locations in a multiprocessor system
US20150070370A1 (en) Memory management techniques
CN104021043B (en) The interruption re-access method and system of batch application program
US9652492B2 (en) Out-of-order execution of strictly-ordered transactional workloads
JPS5983249A (en) Control of queue
US20220326927A1 (en) Abort installation of firmware bundles
WO2019215532A1 (en) Host aware update write
CN110609807A (en) Method, apparatus, and computer-readable storage medium for deleting snapshot data
US8868876B2 (en) Dedicated large page memory pools
US9990290B2 (en) Cache coherency verification using ordered lists
CN114756355B (en) Method and device for automatically and quickly recovering process of computer operating system
CN108733704B (en) Multi-database data processing method and device, storage medium and electronic equipment
US10754842B2 (en) Preplaying transactions that mix hot and cold data
CN111881149A (en) Large-scale concurrency solution method and system based on Java
US9594589B2 (en) Suspending transactional-memory transactions without stack corruption
CN112162832B (en) Method and device for realizing audit data storage under multi-version concurrency control
CN112559243B (en) Data snapshot method and device, electronic equipment and computer readable storage medium
CN114297299A (en) Data synchronization method, device, equipment and medium
CN117873748A (en) Transaction chain execution method, device and equipment
JP2002373082A (en) Method for processing transaction of multitask configuration and transaction processing program
CN118113288A (en) Plug-in compiling method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant