CN117687727A - Method for increasing cache and related products - Google Patents

Method for increasing cache and related products

Info

Publication number
CN117687727A
CN117687727A
Authority
CN
China
Prior art keywords
cache
buffer
round
rotation
procedure
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310848341.3A
Other languages
Chinese (zh)
Inventor
蔡立峰
徐涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN202310848341.3A priority Critical patent/CN117687727A/en
Publication of CN117687727A publication Critical patent/CN117687727A/en
Pending legal-status Critical Current

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The application discloses a method for increasing the number of buffers and related products. The method includes: monitoring, in each sub-flow of the buffer rotation flow, whether obtaining a buffer times out; and if obtaining a buffer times out in any sub-flow of the buffer rotation flow, adding a set number of buffers for buffer rotation. Corresponding products are also disclosed. With this scheme, whether obtaining a buffer times out is monitored in real time in each sub-flow of the buffer rotation flow, and if a timeout occurs, a set number of buffers are added for buffer rotation, so that buffer rotation can be prevented from getting stuck and the reliability of buffer rotation is improved.

Description

Method for increasing cache and related products
Technical Field
The present disclosure relates to the field of computer technologies, and in particular to a method for dynamically increasing the number of buffers and related products.
Background
BlastBufferQueue (BBQ) maintains a producer-consumer model: images rendered on the application (APP) side (produced by the producer) are stored in buffers, and the buffers together with the layer attributes are then submitted via a Transaction to SurfaceFlinger (SF), the actual consumer, for composition and display. Completing rendering on the APP side involves steps such as enqueue (queue), dequeue (dequeue), acquire (acquire) and release (release). Each of these steps needs to obtain a buffer during execution; this constitutes the buffer rotation flow, and at least three buffers are required for the rotation to proceed smoothly. If a buffer is abnormally lost at any link, the entire buffer rotation eventually gets stuck, causing problems such as the terminal (for example, a mobile phone) freezing and the application not responding (Application Not Responding, ANR).
Disclosure of Invention
The application provides a method for increasing the number of buffers and related products, so as to prevent buffer rotation from getting stuck and improve the reliability of buffer rotation.
In a first aspect, a method for increasing the number of buffers is provided, where the method is applied to an electronic device. The method includes: monitoring, in each sub-flow of the buffer rotation flow, whether obtaining a buffer times out; and if obtaining a buffer times out in any sub-flow of the buffer rotation flow, adding a set number of buffers for buffer rotation. In this aspect, whether obtaining a buffer times out is monitored in real time in each sub-flow of the buffer rotation flow, and if a timeout occurs, a set number of buffers are added for buffer rotation, so that buffer rotation can be prevented from getting stuck and the reliability of buffer rotation is improved.
In one possible implementation, the method further includes: restoring the buffer rotation flow based on the added set number of buffers. In this implementation, adding the set number of buffers prevents buffer rotation from getting stuck, so that the buffer rotation flow can be smoothly restored.
In another possible implementation, the method further includes: releasing the set number of buffers after the buffer rotation flow ends. In this implementation, releasing the set number of buffers once the buffer rotation flow ends improves the utilization of buffer resources.
In yet another possible implementation, the method further includes: determining the number of buffers to add. Illustratively, adding or removing buffers in the BBQ may be achieved by controlling mMaxAcquiredBufferCount.
In yet another possible implementation, the buffer rotation flow includes a dequeueBuffer sub-flow, and the monitoring whether obtaining a buffer times out in each sub-flow of the buffer rotation flow includes: performing timeout monitoring in the dequeueBuffer sub-flow to monitor whether obtaining a buffer in the dequeueBuffer sub-flow times out.
In yet another possible implementation, the buffer rotation flow includes a queueBuffer (enqueue buffer) sub-flow, and the monitoring whether obtaining a buffer times out in each sub-flow of the buffer rotation flow includes: monitoring, for a set application, whether obtaining a buffer in the queueBuffer sub-flow times out.
In yet another possible implementation, the method further includes: adding an identifier of the set application. In this implementation, considering the varying degrees of standardization among the rendering approaches of a large number of APPs, and in order to avoid unknown risks when the buffer-adding scheme is applied, identifiers of the set applications (a white list) may be added to the BBQ to identify and control the applications for which the scheme takes effect; the applications covered by the scheme are thus controlled through the white list.
In a second aspect, an apparatus for increasing the number of buffers is provided, which can implement the method in the first aspect. The method may be implemented by software, by hardware, or by hardware executing corresponding software.
In one possible implementation, the apparatus includes: a monitoring unit, configured to monitor whether obtaining a buffer times out in each sub-flow of the buffer rotation flow; and an adding unit, configured to add a set number of buffers for buffer rotation if the monitoring unit detects that obtaining a buffer times out in any sub-flow of the buffer rotation flow.
Optionally, the apparatus further includes: a restoration unit, configured to restore the buffer rotation flow based on the added set number of buffers.
Optionally, the apparatus further includes: a releasing unit, configured to release the set number of buffers after the buffer rotation flow ends.
Optionally, the apparatus further includes: a determining unit, configured to determine the number of buffers to add.
Optionally, the buffer rotation flow includes a dequeueBuffer sub-flow, and the monitoring unit is configured to perform timeout monitoring in the dequeueBuffer sub-flow and monitor whether obtaining a buffer in the dequeueBuffer sub-flow times out.
Optionally, the buffer rotation flow includes a queueBuffer sub-flow, and the monitoring unit is configured to monitor, for a set application, whether obtaining a buffer in the queueBuffer sub-flow times out.
Optionally, the apparatus further includes: an addition unit, configured to add an identifier of the set application.
In another possible implementation, the apparatus includes a processor coupled to a memory. The processor is configured to support the apparatus in performing the corresponding functions of the above method. The memory is coupled to the processor and holds the programs (instructions) and/or data necessary for the apparatus. Optionally, the apparatus may further include an interface for supporting interaction between the apparatus and other apparatuses. The memory may be located inside the apparatus or outside the apparatus; alternatively, the memory may be integrated with the processor.
In a third aspect, an electronic device is provided, including an input device, an output device, a memory, and a processor. The processor is coupled to the input device, the output device and the memory, and is configured to execute a computer program or instructions to control the input device to receive information and control the output device to send information; when executing the computer program or instructions, the processor is further configured to implement the above methods through logic circuits or by executing code instructions. The input device and the output device may be input/output interfaces, used for receiving signals from devices other than the electronic device and transmitting them to the processor, or for sending signals from the processor to devices other than the electronic device.
Wherein the processor is configured to perform the steps of:
monitoring whether obtaining a buffer times out in each sub-flow of the buffer rotation flow; and if obtaining a buffer times out in any sub-flow of the buffer rotation flow, adding a set number of buffers for buffer rotation.
In one possible implementation, the processor is further configured to perform the following step: restoring the buffer rotation flow based on the added set number of buffers.
In another possible implementation, the processor is further configured to perform the following step: releasing the set number of buffers after the buffer rotation flow ends.
In yet another possible implementation, the processor is further configured to perform the following step: determining the number of buffers to add.
In yet another possible implementation, the buffer rotation flow includes a dequeueBuffer sub-flow, and the step, performed by the processor, of monitoring whether obtaining a buffer times out in each sub-flow of the buffer rotation flow includes: performing timeout monitoring in the dequeueBuffer sub-flow to monitor whether obtaining a buffer in the dequeueBuffer sub-flow times out.
In yet another possible implementation, the buffer rotation flow includes a queueBuffer (enqueue buffer) sub-flow, and the step, performed by the processor, of monitoring whether obtaining a buffer times out in each sub-flow of the buffer rotation flow includes: monitoring, for a set application, whether obtaining a buffer in the queueBuffer sub-flow times out.
In yet another possible implementation, the processor is further configured to perform the following step: adding an identifier of the set application.
In a fourth aspect, a computer-readable storage medium is provided, in which a computer program or instructions are stored which, when executed by a computer, implement the method of the first aspect or any implementation of the first aspect.
In a fifth aspect, a computer program product is provided, comprising instructions which, when run on an apparatus, cause the apparatus to perform the method of the first aspect or any implementation of the first aspect.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present application, and a person of ordinary skill in the art may obtain other drawings from these drawings without inventive effort.
FIG. 1 is a schematic diagram of a BlastBufferQueue producer-consumer model provided in an embodiment of the present application;
fig. 2 is a schematic flow chart of buffer rotation in an embodiment of the present application;
FIG. 3 is a flowchart illustrating a method for adding a cache according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of an apparatus for adding cache according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described below with reference to the accompanying drawings in the embodiments of the present application.
In order that the above-recited objects, features and advantages of the present application will become more readily apparent, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments that are illustrated in the appended drawings.
Reference herein to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic may be included in at least one implementation of the present application. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments.
Hereinafter, the terms "first," "second," and the like, if used, are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first", "a second", etc. may explicitly or implicitly include one or more such feature. In the description of the present application, unless otherwise indicated, the meaning of "a plurality" is two or more.
Terms related to the embodiments of the present application are described first:
BlastBufferQueue:
FIG. 1 is a schematic diagram of the BlastBufferQueue producer-consumer model according to an embodiment of the present application. In an Android 12 system, initialization of the BufferQueue-related components is completed in BlastBufferQueue. The BufferQueue mainly comprises the following components: Producer, BufferQueueCore and Consumer (virtual). BufferQueueCore is the actual implementation and the core of the BufferQueue, and defines the basic attributes of the buffer queue. The Producer and the Consumer hold the same BufferQueueCore object, so both can access the attributes defined by the BufferQueue. In the BufferQueue, a BufferSlot is used for storing a GraphicBuffer, an array is used for storing a series of BufferSlots, the default size of the array is 64, and one BufferSlot binds one GraphicBuffer.
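By way of illustration only, the following C++ sketch models the structure described above (an array of 64 slots in which each BufferSlot binds one GraphicBuffer); all type and member names here are simplified stand-ins and do not reproduce the actual Android definitions.

```cpp
#include <array>
#include <memory>

// Simplified stand-in for the graphics buffer rendered by the APP side.
struct GraphicBuffer { /* pixel data, size, format, ... */ };

// One slot binds one GraphicBuffer; whether the slot is currently free is
// tracked so that the producer can find a buffer to render into.
struct BufferSlot {
    std::shared_ptr<GraphicBuffer> graphicBuffer;  // allocated on first use
    bool isFree = true;
};

// Core of the queue shared by Producer and Consumer: an array of 64 slots.
struct BufferQueueCore {
    static constexpr int kDefaultSlotCount = 64;  // default array size
    std::array<BufferSlot, kDefaultSlotCount> slots;
};
```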
The image rendered on the APP side (produced by the producer) is stored in a buffer, and the buffer together with the layer attributes is then submitted via a Transaction to SF, the actual consumer, for composition and display. Completing rendering on the APP side involves steps such as enqueue, dequeue, acquire and release. The entire producer-consumer model therefore resides on the APP side, and the dequeuing, enqueuing, acquiring and other operations on the GraphicBuffer are completed on the APP side, which means the model has changed from cross-process communication to local communication. The resulting change is that the APP side needs to submit the buffer and the layer attributes to the SF side through a Transaction.
Each of the above steps needs to obtain a buffer during execution; this constitutes the buffer rotation flow, and at least three buffers are required for the rotation to proceed smoothly. If a buffer is abnormally lost at any link, the entire buffer rotation eventually gets stuck, causing problems such as the terminal (for example, a mobile phone) freezing and ANR.
The buffer has several states, as shown in Table 1 below:
TABLE 1
Among the above states, the acquireBuffer (acquire buffer) and releaseBuffer (release buffer) steps are often the root causes of buffer rotation getting stuck in the BBQ. The acquireBuffer step submits a buffer from the APP side to the SF side, after which the buffer is marked as being in the active (acquired) state; the releaseBuffer step returns the buffer to the APP side after SF finishes consuming it, after which the buffer switches from the active state back to the free state.
Timeouts of dequeueBuffer (dequeue buffer) and queueBuffer (enqueue buffer) are often the direct manifestation of buffer rotation getting stuck in the BBQ. dequeueBuffer applies for a buffer in the free state for rendering; when dequeueBuffer fails and no buffer can be obtained, the producer cannot produce images normally and the display content of the terminal is no longer refreshed. A similar action of applying for a free-state buffer also exists in queueBuffer (in one branch of queueBuffer, the previously submitted buffer and layer are processed again); when no free-state buffer can be obtained there, the buffer rotation in the BBQ is blocked.
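By way of illustration only, the following C++ sketch models the state transitions that the four interfaces perform on a single slot; the state names (free, dequeued, queued, acquired) follow the conventional BufferQueue slot states and are assumed here rather than taken from Table 1.

```cpp
#include <cassert>

// Simplified slot state machine for the rotation described above.
enum class SlotState { Free, Dequeued, Queued, Acquired };

struct Slot {
    SlotState state = SlotState::Free;
};

// dequeueBuffer: the producer takes a free slot to render into.
void dequeueBuffer(Slot& s) { assert(s.state == SlotState::Free);     s.state = SlotState::Dequeued; }
// queueBuffer: the producer returns the filled slot to the queue.
void queueBuffer(Slot& s)   { assert(s.state == SlotState::Dequeued); s.state = SlotState::Queued; }
// acquireBuffer: the consumer (SF side) takes the slot for composition.
void acquireBuffer(Slot& s) { assert(s.state == SlotState::Queued);   s.state = SlotState::Acquired; }
// releaseBuffer: the consumer hands the slot back; it becomes free again.
void releaseBuffer(Slot& s) { assert(s.state == SlotState::Acquired); s.state = SlotState::Free; }

int main() {
    Slot s;
    dequeueBuffer(s);
    queueBuffer(s);
    acquireBuffer(s);
    releaseBuffer(s);  // if this step is lost, the slot never returns to free
    return 0;
}
```

If the releaseBuffer step is lost, the slot remains in the acquired state, which is exactly the stuck situation analyzed in the flow below.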
Fig. 2 is a schematic flowchart of buffer rotation in an example of an embodiment of the present application, which includes the following steps:
S201, the Producer calls dequeueBuffer to request a buffer from the BBQ. After receiving the request, the BBQ searches the BufferSlot queue for a buffer in the free state and returns the index of that buffer.
S202, after filling in the data, the Producer calls queueBuffer to return the buffer to the BBQ. The BBQ packages the specified buffer into a BufferItem object, puts the BufferItem into mQueue, and notifies the Consumer to consume it.
S203, the Consumer calls acquireBuffer to acquire a buffer from the BBQ.
S204, the APP side submits the buffer and the layer attributes to SF through a Transaction for composition and display.
S205, on the SF side, latchBuffer obtains the target buffer from the Transaction (TX) for consumption.
S206, on the SF side, in PostComposition, after composition is completed a callback notification is sent to release the corresponding buffer.
S207, releaseBuffer is called to return the buffer to the BBQ, and the buffer then continues to participate in the rotation.
As can be seen from the above flow, buffer rotation in the BBQ currently runs into quite a few problems. Their root causes differ, but they eventually all manifest as dequeueBuffer or queueBuffer being blocked by a timeout. For example, after SF consumes the buffer, the TX (a mechanism for inter-process communication) is not sent; or the TX is sent but SF does not continue to execute it; or the TX is executed successfully but actions such as releaseBuffer still fail to be triggered (the buffer is lost on the SF side). Or, for example, the APP side fails to send the buffer to SF for consumption normally, causing buffers to pile up (the buffer is lost on the APP side). Both types of situations result in the limited buffers being stuck in the active state, with no free-state buffer available for the producer.
In order to solve the problem that buffer rotation may get stuck in the above buffer rotation flow, the present application provides a scheme for adding buffers: whether obtaining a buffer times out is monitored in real time in each sub-flow of the buffer rotation flow, and if a timeout occurs, a set number of buffers are added for buffer rotation, so that buffer rotation can be prevented from getting stuck and the reliability of buffer rotation is improved.
Fig. 3 is a flowchart illustrating a method for increasing the number of buffers according to an embodiment of the present application. The method may be applied to an electronic device, such as a mobile phone or a tablet. The method may include the following steps:
S301, monitor whether obtaining a buffer times out in each sub-flow of the buffer rotation flow. If yes, go to step S302; otherwise, continue monitoring.
In this embodiment, one buffer rotation flow includes a plurality of buffer rotation sub-flows. Illustratively, as shown in FIG. 2, the APP side uses the BBQ model to complete rendering, which includes steps such as enqueue, dequeue, acquire and release. Correspondingly, each step involves a buffer, giving the following buffer rotation sub-flows (or buffer acquisition interfaces): dequeueBuffer, queueBuffer, acquireBuffer, releaseBuffer, and so on.
In each buffer rotation sub-flow, a buffer is requested from the BBQ. For example, in FIG. 2, the Producer calls dequeueBuffer to request a buffer from the BBQ, and after receiving the request the BBQ searches the BufferSlot queue for a buffer in the free state; after filling in the data, the Producer calls queueBuffer to return the buffer to the BBQ; the Consumer calls acquireBuffer to obtain a buffer from the BBQ.
However, if the buffer rotation gets stuck as described above, at least one buffer rotation sub-flow may fail to acquire a buffer within the specified time, that is, obtaining a buffer in that sub-flow times out.
In this embodiment, whether obtaining a buffer times out may be monitored in each buffer rotation sub-flow.
In a specific implementation, as described above, dequeueBuffer and queueBuffer timeouts are often the direct manifestation of buffer rotation getting stuck in the BBQ. Thus:
if the buffer rotation flow includes the dequeueBuffer sub-flow, timeout monitoring can be performed in the dequeueBuffer sub-flow to monitor whether obtaining a buffer in the dequeueBuffer sub-flow times out;
if the buffer rotation flow includes the queueBuffer sub-flow, whether obtaining a buffer in the queueBuffer sub-flow times out can be monitored.
Further, considering the varying degrees of standardization among the rendering approaches of a large number of APPs, and in order to avoid unknown risks when the buffer-adding scheme is applied, identifiers of the set applications (a white list) can be added to the BBQ to identify and control the applications for which the scheme takes effect; the applications covered by the scheme are thus controlled through the white list. Therefore, before this embodiment is put into effect, an identifier of each set application, for example, its name or index, may be added. Whether obtaining a buffer in the queueBuffer sub-flow times out can then be monitored for the set applications.
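By way of illustration only, the following C++ sketch shows one way such a white list could be checked before the monitoring takes effect; the container, the sample entries and the function name are hypothetical and not taken from the text.

```cpp
#include <string>
#include <unordered_set>

// Hypothetical white list of application identifiers (names are used here);
// the actual identifier format (name, index, ...) is left open by the text.
static const std::unordered_set<std::string> kBufferIncreaseWhitelist = {
    "com.example.game",      // illustrative entries only
    "com.example.videoapp",
};

// Returns true if the buffer-adding scheme should take effect for this app.
bool isSchemeEnabledFor(const std::string& appName) {
    return kBufferIncreaseWhitelist.count(appName) != 0;
}
```

The queueBuffer-side monitoring described below would then be armed only when isSchemeEnabledFor() returns true for the current application.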
S302, if obtaining a buffer times out in any buffer rotation sub-flow, add a set number of buffers for buffer rotation.
If obtaining a buffer times out in any buffer rotation sub-flow (for example, the dequeueBuffer sub-flow and/or the queueBuffer sub-flow), the rotation may be stuck. Therefore, a set number of buffers is added for buffer rotation.
The set number may be pre-configured at the factory of the terminal, or may be configured dynamically; for example, the number of buffers to add may be determined when a buffer-acquisition timeout is detected in any buffer rotation sub-flow. As described above, dequeueBuffer, queueBuffer and acquireBuffer each involve obtaining a buffer, so in general at least 3 buffers can be added.
Illustratively, adding or removing buffers in the BBQ may be achieved by controlling mMaxAcquiredBufferCount.
An interface increment() is provided for adding buffers; it can be triggered successfully when the BBQ still has enough buffer capacity available, and each time it is triggered the set number of buffers is added. Correspondingly, an interface release() is provided for releasing the newly added buffers; when triggered, it releases the newly added buffers one by one.
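By way of illustration only, the following C++ sketch shows how increment() and release() could adjust mMaxAcquiredBufferCount under the constraints described above. Apart from those three names, which come from the text, everything here (the class, the capacity check, the counters) is a simplified assumption rather than the actual BLASTBufferQueue code.

```cpp
#include <mutex>

// Minimal sketch of the buffer-adding interfaces described above.
class BufferBudget {
public:
    BufferBudget(int baseCount, int capacity)
        : mMaxAcquiredBufferCount(baseCount), mCapacity(capacity) {}

    // Adds `count` buffers at once; succeeds only while capacity remains.
    bool increment(int count) {
        std::lock_guard<std::mutex> lock(mLock);
        if (mMaxAcquiredBufferCount + count > mCapacity) {
            return false;  // not enough room left in the BBQ
        }
        mMaxAcquiredBufferCount += count;
        mAdded += count;
        return true;
    }

    // Releases the newly added buffers one by one.
    void release() {
        std::lock_guard<std::mutex> lock(mLock);
        if (mAdded > 0) {
            --mAdded;
            --mMaxAcquiredBufferCount;
        }
    }

private:
    std::mutex mLock;
    int mMaxAcquiredBufferCount;  // controls how many buffers the BBQ may use
    int mAdded = 0;               // how many buffers this scheme has added
    const int mCapacity;          // upper bound (e.g., the 64-slot array)
};
```

In this sketch, increment() would be called once per detected timeout (adding the set number of buffers), and release() would be called repeatedly at the end of the rotation flow (see S304 below) until the added buffers are all returned.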
Specifically, the set number of buffers may be added for buffer rotation at the following points in time (a combined sketch of both monitoring points is given after this passage):
Timeout monitoring is added in the dequeueBuffer flow; once a timeout in obtaining a buffer in dequeueBuffer is detected, increment() is triggered to add new buffers to the BBQ and re-activate the rotation.
While queueBuffer is waiting for a free-state buffer, a sub-thread is started for the whitelisted application to monitor whether obtaining a buffer times out (repeatedly creating new temporary threads should be avoided, which can be implemented with a static variable). The implementation principle is as follows: a signalStart wakes up a while-loop that checks whether queueBuffer has timed out, starting a 2 s timeout monitoring window; if the original BBQ flow successfully obtains a buffer within the 2 s, the monitoring is stopped through signalEnd and the buffer-adding scheme is not triggered; if the BBQ is still blocked and cannot obtain a buffer after the 2 s timeout, increment() is triggered so that 3 buffers are added to the BBQ queue, and a signal is sent to re-activate the BBQ rotation.
It should be noted that, when adding the above monitoring and buffer-adding flow, deadlocks in the flow must be prevented, system memory must not be trampled on, and the monitoring sub-thread must not block the rendering main thread.
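By way of illustration only, the following self-contained C++ sketch combines the two monitoring points described above: an inline timeout check in dequeueBuffer and a 2 s sub-thread watchdog for queueBuffer that is created once (via a static instance) rather than per wait. The helper names, the condition-variable plumbing and the stubbed increment() are assumptions; only the 2 s window, the count of three buffers and the signalStart/signalEnd pairing come from the text.

```cpp
#include <chrono>
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <thread>

using namespace std::chrono_literals;

// Stub for the buffer-adding interface described earlier; the real call would
// raise mMaxAcquiredBufferCount. Here it only reports what would happen.
bool increment(int count) {
    std::printf("adding %d buffers to the BBQ\n", count);
    return true;
}

// --- Monitoring point 1: inline timeout check in the dequeueBuffer flow ----
struct SlotWait {
    std::mutex lock;
    std::condition_variable cond;
    bool freeSlotAvailable = false;
};

// Waits for a free slot; on timeout, triggers increment() to add buffers and
// re-activate the rotation instead of staying blocked.
bool waitForFreeSlotInDequeue(SlotWait& w, std::chrono::milliseconds timeout) {
    std::unique_lock<std::mutex> guard(w.lock);
    if (!w.cond.wait_for(guard, timeout, [&] { return w.freeSlotAvailable; })) {
        return increment(3);
    }
    return true;
}

// --- Monitoring point 2: sub-thread watchdog while queueBuffer waits -------
class QueueBufferWatchdog {
public:
    // Single long-lived instance: avoids creating a new temporary thread for
    // every wait, as required by the note above.
    static QueueBufferWatchdog& instance() {
        static QueueBufferWatchdog watchdog;
        return watchdog;
    }

    // signalStart: wake the watchdog loop and open a 2 s monitoring window.
    void signalStart() {
        std::lock_guard<std::mutex> guard(mLock);
        mArmed = true;
        mCond.notify_one();
    }

    // signalEnd: the original flow obtained a buffer in time; stop monitoring
    // so that the buffer-adding scheme is not triggered.
    void signalEnd() {
        std::lock_guard<std::mutex> guard(mLock);
        mArmed = false;
        mCond.notify_one();
    }

private:
    QueueBufferWatchdog() : mThread([this] { run(); }) {}
    ~QueueBufferWatchdog() {
        {
            std::lock_guard<std::mutex> guard(mLock);
            mQuit = true;
            mCond.notify_one();
        }
        mThread.join();
    }

    void run() {
        std::unique_lock<std::mutex> guard(mLock);
        while (!mQuit) {
            mCond.wait(guard, [&] { return mArmed || mQuit; });
            if (mQuit) break;
            // If signalEnd() disarms us within 2 s, do nothing; otherwise the
            // BBQ is still blocked, so add 3 buffers and re-activate rotation.
            if (!mCond.wait_for(guard, 2s, [&] { return !mArmed || mQuit; })) {
                increment(3);
                mArmed = false;
            }
        }
    }

    std::mutex mLock;
    std::condition_variable mCond;
    bool mArmed = false;
    bool mQuit = false;
    std::thread mThread;  // initialized last, after the flags above
};
```

In use, queueBuffer would call QueueBufferWatchdog::instance().signalStart() before waiting for a free-state buffer and signalEnd() as soon as one is obtained, so that the rendering main thread itself never blocks on the watchdog.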
In addition, the flow may further include the following steps (indicated by dotted lines):
s303, restoring the buffer rotation flow based on the increased set number of buffers.
Aiming at the problem that dequeue buffer/queue buffer overtime is blocked caused by different root causes, the embodiment provides the dequeue buffer/queue buffer overtime monitoring and recovering mechanism, after the fact that the acquisition buffer overtime is monitored, a set number of buffers are dynamically increased in the BBQ, the buffer rotation in the BBQ is recovered again, on one hand, the newly increased buffers can ensure that the BBQ rotation is not blocked, and on the other hand, the process of blocking before the activation is also possible, for example, the lost buffers are released in the process, and the buffers blocked on the APP side are sent to the SF side; and after the whole BBQ cycle is restarted, releasing the newly added caches at a proper time, so that the original flow is restored and the memory is not influenced. Thus, the cache rotation flow can be finally restored.
By way of example, experiments show that the round-robin flow can be restored after the dequeue buffer/queue buffer is overtime for 2s, the reliability of the round-robin buffer is improved, and the progress of the rendering main thread is ensured.
S304, finishing the buffer rotation flow, and releasing the set number of buffers.
And (3) after the buffer rotation process is finished, releasing the set number of buffers, and being applicable to the next buffer rotation process, so that the utilization rate of buffer resources can be improved.
According to the method for increasing the cache, whether the cache is overtime or not is obtained in each cache round-robin flow in the cache round-robin flow is monitored in real time, if the cache is overtime, the set number of caches are increased for the cache round-robin, the cache round-robin blocking can be avoided, and the reliability of the cache round-robin is improved.
It will be appreciated that in the various embodiments above, the methods and/or steps implemented by an electronic device may also be implemented by a component (e.g., a chip or circuit) that may be used in an electronic device.
The above description mainly describes the scheme provided by the embodiments of the present application from the perspective of the method flow. Correspondingly, an embodiment of the present application further provides an apparatus for increasing the number of buffers, which is used to implement the above method. The apparatus may be the electronic device in the above method embodiments, or may be a component usable in the electronic device. It will be appreciated that the apparatus may include hardware structures and/or software modules for performing the above functions. Those skilled in the art will readily appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented by hardware or by a combination of hardware and computer software. Whether a function is performed by hardware or by computer-software-driven hardware depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be interpreted as going beyond the scope of the present application.
According to the embodiment of the application, the function modules of the device for adding the cache may be divided according to the embodiment of the method, for example, each function module may be divided corresponding to each function, or two or more functions may be integrated in one processing unit. The integrated modules may be implemented in hardware or in software functional modules. It should be noted that, in the embodiment of the present application, the division of the modules is schematic, which is merely a logic function division, and other division manners may be implemented in actual implementation.
Based on the same conception as the above method for increasing the number of buffers, the present application further provides an apparatus for increasing the number of buffers, as follows:
Fig. 4 is a schematic structural diagram of an apparatus for increasing the number of buffers according to an embodiment of the present application. The apparatus 400 includes a monitoring unit 401 and an adding unit 402, and may further include (shown in phantom): a restoration unit 403, a releasing unit 404, a determining unit 405, and an addition unit 406.
The monitoring unit 401 is configured to monitor whether obtaining a buffer times out in each sub-flow of the buffer rotation flow; and the adding unit 402 is configured to add a set number of buffers for buffer rotation if the monitoring unit detects that obtaining a buffer times out in any sub-flow of the buffer rotation flow.
In a possible implementation, the restoration unit 403 is configured to restore the buffer rotation flow based on the added set number of buffers.
In another possible implementation, the releasing unit 404 is configured to release the set number of buffers after the buffer rotation flow ends.
In a further possible implementation, the determining unit 405 is configured to determine the number of buffers to add.
In yet another possible implementation, the buffer rotation flow includes a dequeueBuffer sub-flow, and the monitoring unit 401 is configured to perform timeout monitoring in the dequeueBuffer sub-flow and monitor whether obtaining a buffer in the dequeueBuffer sub-flow times out.
In yet another possible implementation, the buffer rotation flow includes a queueBuffer sub-flow, and the monitoring unit 401 is configured to monitor, for a set application, whether obtaining a buffer in the queueBuffer sub-flow times out.
In a further possible implementation, the addition unit 406 is configured to add an identifier of the set application.
The specific implementation of each unit described above may refer to the related description in the embodiment shown in fig. 3, and will not be repeated here.
According to the above apparatus for increasing the number of buffers, whether obtaining a buffer times out is monitored in real time in each sub-flow of the buffer rotation flow, and if a timeout occurs, a set number of buffers are added for buffer rotation, so that buffer rotation can be prevented from getting stuck and the reliability of buffer rotation is improved.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device 500 may include:
an input device 501, an output device 502, a memory 503, and a processor 504 (the electronic device may include one or more processors 504; one processor is taken as an example in Fig. 5). In some embodiments of the present application, the input device 501, the output device 502, the memory 503, and the processor 504 may be connected by a bus or in other manners; a bus connection is taken as an example in Fig. 5.
Wherein the processor 504 is configured to perform the steps of:
monitoring whether obtaining a buffer times out in each sub-flow of the buffer rotation flow; and if obtaining a buffer times out in any sub-flow of the buffer rotation flow, adding a set number of buffers for buffer rotation.
In one possible implementation, the processor 504 is further configured to perform the following step: restoring the buffer rotation flow based on the added set number of buffers.
In another possible implementation, the processor 504 is further configured to perform the following step: releasing the set number of buffers after the buffer rotation flow ends.
In yet another possible implementation, the processor 504 is further configured to perform the following step: determining the number of buffers to add.
In yet another possible implementation, the buffer rotation flow includes a dequeueBuffer sub-flow, and the step, performed by the processor 504, of monitoring whether obtaining a buffer times out in each sub-flow of the buffer rotation flow includes: performing timeout monitoring in the dequeueBuffer sub-flow to monitor whether obtaining a buffer in the dequeueBuffer sub-flow times out.
In yet another possible implementation, the buffer rotation flow includes a queueBuffer (enqueue buffer) sub-flow, and the step, performed by the processor 504, of monitoring whether obtaining a buffer times out in each sub-flow of the buffer rotation flow includes: monitoring, for a set application, whether obtaining a buffer in the queueBuffer sub-flow times out.
In yet another possible implementation, the processor 504 is further configured to perform the following step: adding an identifier of the set application.
According to the electronic device provided by the embodiments of the present application, whether obtaining a buffer times out is monitored in real time in each sub-flow of the buffer rotation flow, and if a timeout occurs, a set number of buffers are added for buffer rotation, so that buffer rotation can be prevented from getting stuck and the reliability of buffer rotation is improved.
The division of modules in the present application is schematic and is merely a division by logical function; there may be other division manners in actual implementation. In addition, the functional modules in the examples of the present application may be integrated in one processor, may exist separately and physically, or two or more modules may be integrated in one module. The integrated modules may be implemented in the form of hardware or in the form of software functional modules.
It is to be appreciated that the processor in embodiments of the present application may be a central processing unit (central processing unit, CPU), but may also be other general purpose processors, digital signal processors (digital signal processor, DSP), application specific integrated circuits (application specific integrated circuit, ASIC), field programmable gate arrays (field programmable gate array, FPGA) or other programmable logic devices, transistor logic devices, hardware components, or any combination thereof. The general purpose processor may be a microprocessor, but in the alternative, it may be any conventional processor.
The present application also provides a computer-readable storage medium (memory) having stored therein a computer program or instructions which, when executed, implement the method in the above embodiments. The computer storage medium is a memory device in the apparatus for storing programs and data. It will be appreciated that the computer storage media herein may include both built-in storage media in the device and extended storage media supported by the device. The computer storage media provides storage space that stores the operating system of the device. Also stored in the memory space are one or more instructions, which may be one or more computer programs (including program code), adapted to be loaded and executed by the processor. The computer storage medium herein may be a high-speed RAM memory or a non-volatile memory (non-volatile memory), such as at least one magnetic disk memory; optionally, at least one computer storage medium remote from the processor may be present.
Embodiments of the present application also provide a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of the above embodiments.
Embodiments of the present application also provide a circuit coupled to a memory, the circuit being used to perform the method shown in the above embodiments. The circuitry may include chip circuitry.
It should be noted that one or more of the above units or modules may be implemented in software, in hardware, or in a combination of both. When any of the above units or modules is implemented in software, the software exists in the form of computer program instructions stored in a memory, and a processor may be used to execute the program instructions to implement the above method flows.
In this application, the processor may be a general purpose processor, a digital signal processor, an application specific integrated circuit, a field programmable gate array or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and may implement or perform the methods, steps, and logic blocks disclosed herein. The general purpose processor may be a microprocessor or any conventional processor or the like. The steps of a method disclosed in connection with the present application may be embodied directly in a hardware processor or in a combination of hardware and software modules within a processor.
When the above units or modules are implemented in hardware, the hardware may be any one or any combination of a CPU, microprocessor, digital signal processing (digital signal processing, DSP) chip, micro control unit (microcontroller unit, MCU), artificial intelligence processor, ASIC, SoC, FPGA, PLD, dedicated digital circuitry, hardware accelerator, or non-integrated discrete device, which may run the necessary software or be independent of the software to perform the above method flows.
Optionally, an embodiment of the present application further provides a chip system, including: at least one processor and an interface, the at least one processor being coupled with the memory through the interface, the at least one processor, when running a computer program or instructions in the memory, causes the chip system to perform the method in the method embodiments described above. Alternatively, the chip system may be formed by a chip, or may include a chip and other discrete devices, which are not specifically limited in this embodiment of the present application.
The memory in this application may also be circuitry or any other device capable of performing the function of storing program instructions and/or data. The memory may be any medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. For example, the memory may be a non-volatile memory, such as a digital versatile disc (digital versatile disc, DVD), a hard disk drive (HDD) or a solid state drive (SSD), or may be a volatile memory, such as a random access memory (random-access memory, RAM).
It should be understood that in the description of the present application, unless otherwise indicated, "/" means that the associated object is an "or" relationship, e.g., a/B may represent a or B; wherein A, B may be singular or plural. Also, in the description of the present application, unless otherwise indicated, "a plurality" means two or more than two. "at least one of" or the like means any combination of these items, including any combination of single item(s) or plural items(s). For example, at least one (one) of a, b, or c may represent: a, b, c, a-b, a-c, b-c, or a-b-c, wherein a, b, c may be single or plural. In addition, in order to clearly describe the technical solutions of the embodiments of the present application, in the embodiments of the present application, the words "first", "second", and the like are used to distinguish the same item or similar items having substantially the same function and effect. It will be appreciated by those of skill in the art that the words "first," "second," and the like do not limit the amount and order of execution, and that the words "first," "second," and the like do not necessarily differ. Meanwhile, in the embodiments of the present application, words such as "exemplary" or "such as" are used to mean serving as examples, illustrations, or descriptions. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion that may be readily understood.
The terms "comprising" and "having" and any variations thereof, as referred to in the following description of the present application, are intended to cover non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed but may optionally include other steps or elements not listed or inherent to such process, method, article, or apparatus.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented using a software program, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions described in accordance with embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by a wired (e.g., coaxial cable, fiber optic, digital subscriber line (digital subscriber line, DSL)) or wireless (e.g., infrared, wireless, microwave, etc.).
Although the present application has been described herein in connection with various embodiments, other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed application, from a review of the figures, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
It will be appreciated that the various numerical numbers referred to in the embodiments of the present application are merely for ease of description and are not intended to limit the scope of the embodiments of the present application. The sequence number of each process does not mean the sequence of the execution sequence, and the execution sequence of each process should be determined according to the function and the internal logic.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
The components in the device of the embodiment of the application can be combined, divided and deleted according to actual needs. Those skilled in the art can combine or combine the features of the different embodiments described in this specification and the different embodiments.
In this application, where there is no logical conflict, examples may be referred to each other, for example, methods and/or terms between method embodiments may be referred to each other, for example, functions and/or terms between apparatus examples and method examples may be referred to each other.
The above-disclosed preferred embodiments of the present application are provided only as an aid to the elucidation of the present application. The preferred embodiments are not exhaustive or to limit the application to the precise form disclosed. Obviously, many modifications and variations are possible in light of the teachings of the embodiments of the present application. The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, to thereby enable others skilled in the art to best understand and utilize the invention. This application is to be limited only by the claims and the full scope and equivalents thereof.

Claims (18)

1. A method for increasing the number of buffers, wherein the method is applied to an electronic device, and the method comprises:
monitoring whether obtaining a buffer times out in each sub-flow of a buffer rotation flow; and
if obtaining a buffer times out in any sub-flow of the buffer rotation flow, adding a set number of buffers for buffer rotation.
2. The method according to claim 1, wherein the method further comprises:
restoring the buffer rotation flow based on the added set number of buffers.
3. The method according to claim 2, wherein the method further comprises:
releasing the set number of buffers after the buffer rotation flow ends.
4. The method according to any one of claims 1-3, wherein the method further comprises:
determining the number of buffers to be added.
5. The method according to any one of claims 1-4, wherein the buffer rotation flow comprises a dequeueBuffer sub-flow, and the monitoring whether obtaining a buffer times out in each sub-flow of the buffer rotation flow comprises:
performing timeout monitoring in the dequeueBuffer sub-flow to monitor whether obtaining a buffer in the dequeueBuffer sub-flow times out.
6. The method according to any one of claims 1-5, wherein the buffer rotation flow comprises a queueBuffer (enqueue buffer) sub-flow, and the monitoring whether obtaining a buffer times out in each sub-flow of the buffer rotation flow comprises:
monitoring, for a set application, whether obtaining a buffer in the queueBuffer sub-flow times out.
7. The method of claim 6, wherein the method further comprises:
and adding the identification of the setting application.
8. An apparatus for increasing the number of buffers, the apparatus comprising:
a monitoring unit, configured to monitor whether obtaining a buffer times out in each sub-flow of a buffer rotation flow; and
an adding unit, configured to add a set number of buffers for buffer rotation if the monitoring unit detects that obtaining a buffer times out in any sub-flow of the buffer rotation flow.
9. The apparatus of claim 8, wherein the apparatus further comprises:
a restoration unit, configured to restore the buffer rotation flow based on the added set number of buffers.
10. The apparatus of claim 9, wherein the apparatus further comprises:
a releasing unit, configured to release the set number of buffers after the buffer rotation flow ends.
11. The apparatus according to any one of claims 8-10, wherein the apparatus further comprises:
a determining unit, configured to determine the number of buffers to be added.
12. The apparatus according to any one of claims 8-11, wherein the buffer rotation flow comprises a dequeueBuffer sub-flow, and the monitoring unit is configured to perform timeout monitoring in the dequeueBuffer sub-flow and monitor whether obtaining a buffer in the dequeueBuffer sub-flow times out.
13. The apparatus according to any one of claims 8-12, wherein the buffer rotation flow comprises a queueBuffer (enqueue buffer) sub-flow, and the monitoring unit is configured to monitor, for a set application, whether obtaining a buffer in the queueBuffer sub-flow times out.
14. The apparatus of claim 13, wherein the apparatus further comprises:
an addition unit, configured to add an identifier of the set application.
15. An electronic device comprising an input device, an output device, a memory, and a processor; wherein the processor is configured to perform the method of any of claims 1-7.
16. A chip for performing the method of any one of claims 1-7.
17. A chip module comprising an input-output component and a chip for performing the method of any of claims 1-7.
18. A computer-readable storage medium, wherein the storage medium stores a computer program or instructions which, when executed by a device for dynamically increasing the number of buffers, implement the method of any one of claims 1-7.
CN202310848341.3A 2023-07-11 2023-07-11 Method for increasing cache and related products Pending CN117687727A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310848341.3A CN117687727A (en) 2023-07-11 2023-07-11 Method for increasing cache and related products

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310848341.3A CN117687727A (en) 2023-07-11 2023-07-11 Method for increasing cache and related products

Publications (1)

Publication Number Publication Date
CN117687727A true CN117687727A (en) 2024-03-12

Family

ID=90135939

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310848341.3A Pending CN117687727A (en) 2023-07-11 2023-07-11 Method for increasing cache and related products

Country Status (1)

Country Link
CN (1) CN117687727A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010099718A1 (en) * 2009-03-03 2010-09-10 华为技术有限公司 Method and equipment for controlling data tranmission, and system thereof
CN105608115A (en) * 2015-12-11 2016-05-25 北京奇虎科技有限公司 Data acquisition method and apparatus
US10743036B1 (en) * 2018-05-30 2020-08-11 Amazon Technologies, Inc. Automatically augmenting user resources dedicated to serving content to a content delivery network
CN109992347A (en) * 2019-04-10 2019-07-09 Oppo广东移动通信有限公司 Interface display method, device, terminal and storage medium
CN111367672A (en) * 2020-03-05 2020-07-03 北京奇艺世纪科技有限公司 Data caching method and device, electronic equipment and computer storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination