CN115756998B - Cache data re-fetching mark verification method, device and system

Publication number: CN115756998B
Application number: CN202310010875.9A
Authority: CN (China)
Prior art keywords: data, cache, downstream, request, read
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN115756998A
Inventor: (name withheld at the inventor's request)
Current Assignee: Moore Threads Technology Co Ltd
Original Assignee: Moore Threads Technology Co Ltd
Application filed by Moore Threads Technology Co Ltd
Priority: CN202310010875.9A

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The application provides a cache data re-fetching mark verification method, device and system. The method includes: obtaining read-back data returned by the cache based on a read request for reading target cache data carrying a re-fetching mark, and obtaining the cache data corresponding to the read request from a cache model as expected data; and checking the read-back data against the expected data to determine whether the re-fetching mark was set successfully. The re-fetching mark is a mark that the cache, in response to a re-fetching mark request sent by a downstream behavior level model, sets on the target cache data corresponding to the downstream data to be re-fetched that was updated by the downstream behavior level model, and the cache data corresponding to the target cache data stored in the cache model is the updated downstream data to be re-fetched.

Description

Cache data re-fetching mark verification method, device and system
Technical Field
The present application relates to the field of cache verification technologies, and in particular to the field of graphics processor technologies, and more particularly to a cache data re-fetching mark verification method, apparatus, and system.
Background
Correctly executing write requests to store data, and returning data that matches expectations for read requests by address, is key to a cache storing and reading data correctly and to the normal operation of the processor.
In some scenarios, the cache also needs to support additional functional characteristics (for example, a cache data re-fetching mark function), and how to verify these functional characteristics is a problem that urgently needs to be solved.
Disclosure of Invention
An object of the present application is to provide a cache data refetching mark verification method, which implements verification of a refetching mark of cache data. Another object of the present application is to provide a cache data refetching mark verification apparatus. It is yet another object of the present application to provide a cache verification system. It is yet another object of the present application to provide a computer device. It is a further object of the present application to provide a readable medium. It is a further object of the present application to provide a computer program product.
In order to achieve the above object, one aspect of the present application discloses a method for verifying cache data refetching marks, including:
obtaining read-back data returned by the cache based on a read request for reading target cache data with a re-fetching mark, and obtaining cache data corresponding to the read request from a cache model as expected data;
and checking the read-back data according to the expected data to determine whether the setting of the re-fetching mark is successful, wherein the re-fetching mark is a mark set by the cache, in response to a re-fetching mark request sent by a downstream behavior level model, on target cache data in the cache corresponding to the downstream data to be re-fetched updated by the downstream behavior level model; the re-fetching mark is used for indicating that, when the cache receives a read request for the target cache data, the downstream data corresponding to the read request acquired from the downstream behavior level model is determined as the read-back data; and the cache data corresponding to the target cache data stored in the cache model is the downstream data to be re-fetched.
Preferably, the re-fetching mark request is obtained by the downstream behavior-level model modifying the downstream data to be re-fetched into a default value so as to update the downstream data, and is formed according to the updated downstream data to be re-fetched.
Preferably, the re-fetching mark request is obtained by the downstream behavior-level model deleting the downstream data to be re-fetched so as to update it, and is formed according to the updated downstream data to be re-fetched.
Preferably, a data storage address corresponding to the cache data is stored in the cache model;
before obtaining the cache data corresponding to the read request from the cache model as expected data, the method further comprises:
determining a target data storage address of cache data in the cache corresponding to downstream data to be re-fetched according to the re-fetching mark request;
and modifying the cache data corresponding to the target data storage address in the cache model into a default value.
Preferably, the downstream behavior-level model deletes the downstream data to be re-fetched so as to update the downstream data to be re-fetched;
the modifying the cache data corresponding to the target data storage address in the cache model to a default value specifically includes:
and deleting the cache data corresponding to the target data storage address in the cache model.
Preferably, the verifying the read-back data according to the expected data to determine whether the setting of the refetching flag is successful specifically includes:
comparing the read-back data with the expected data;
if the read-back data and the expected data are consistent, the setting of the re-fetching mark is successful;
if not, the setting of the re-fetching mark fails.
Preferably, the re-fetching mark request is obtained by the downstream behavior-level model receiving a downstream data modification request transmitted by an external host, determining the downstream data to be re-fetched according to the downstream data modification request, updating that downstream data, and forming the request according to the updated downstream data to be re-fetched.
Preferably, before obtaining the read-back data returned by the cache based on the read request for reading the cache data with the refetch tag, the method further comprises:
parameterizing and extracting the verification request input into the cache based on the parameter configuration of the cache to obtain verification parameters;
obtaining a universal verification request according to the verification parameters and a preset data structure;
and determining whether the verification request is a read request for reading the cache data with the refetch mark according to the generalized verification request.
Preferably, the method further includes, before obtaining the cache data corresponding to the read-back data from the cache model as the expected data:
storing the read request to a request queue;
the obtaining the cache data corresponding to the read request from the cache model as expected data includes:
acquiring a read request corresponding to the read-back data from the request queue according to the read-back data returned by the cache;
and obtaining cache data corresponding to the read-back data from a cache model as expected data according to the read request.
Preferably, before checking the read-back data according to the expected data, further comprising:
storing the expected data to an expected data queue;
and acquiring expected data corresponding to the read request from the expected data queue.
The application also discloses a cache data re-fetching mark verification device, which comprises a downstream behavior level model, a cache model and a control module;
the downstream behavior level model is configured to update downstream data to be re-fetched to form a re-fetch tag request, send the re-fetch tag request to a cache so that the cache responds to the re-fetch tag request sent by the downstream behavior level model, and set a re-fetch tag for target cache data in the cache corresponding to the downstream data to be re-fetched, which is updated by the downstream behavior level model, where the re-fetch tag is used to indicate that when the cache receives a read request for the target cache data, the downstream data corresponding to the read request, which is obtained from the downstream behavior level model, is determined as read-back data;
the cache model is used for storing the cache data corresponding to the target cache data as the downstream data to be fetched again;
the control module is used for acquiring read-back data returned by the cache based on a read request for reading target cache data with a refetching mark, acquiring cache data corresponding to the read request from a cache model as expected data, and checking the read-back data according to the expected data to determine whether the refetching mark is successfully set.
The application also discloses a cache verification system which comprises a cache and the cache data re-fetching mark verification device.
The application also discloses a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method as described above when executing the program.
The present application also discloses a computer readable medium having stored thereon a computer program which, when executed by a processor, carries out the method as described above.
The cache data re-fetching mark verification method of the present application obtains read-back data returned by the cache based on a read request for reading target cache data with a re-fetching mark, obtains the cache data corresponding to the read request from a cache model as expected data, and checks the read-back data against the expected data to determine whether the re-fetching mark was set successfully. The re-fetching mark is a mark that the cache, in response to a re-fetching mark request sent by a downstream behavior level model, sets on the target cache data in the cache corresponding to the downstream data to be re-fetched updated by that model; the re-fetching mark is used to indicate that, when the cache receives a read request for the target cache data, the downstream data corresponding to the read request acquired from the downstream behavior level model is returned as the read-back data; and the cache data corresponding to the target cache data stored in the cache model is the downstream data to be re-fetched. In this way, the downstream data to be re-fetched is updated by the downstream behavior level model, which can simulate the functions of the downstream module of the cache, a re-fetching mark request is formed and sent to the cache, and the cache can determine the target cache data corresponding to the updated downstream data according to the received re-fetching mark request and set the re-fetching mark for that target cache data. Further, to verify whether the re-fetching mark of the target cache data was set successfully, a read request for the target cache data with the re-fetching mark is sent to the cache, so that the cache, after determining that the target cache data carries the re-fetching mark, acquires the updated downstream data from the downstream behavior level model in real time. Therefore, when the cache sets the re-fetching mark successfully, the read-back data it returns for that read request is the updated downstream data held in the downstream behavior level model; when the cache fails to set the re-fetching mark, it still returns the target cache data stored in the cache. Consequently, after the re-fetching mark is requested for the target cache data, the read-back data returned by the cache for the read request can be compared with the expected data of the read request to determine whether the cache set the re-fetching mark for the target cache data successfully, thereby verifying the re-fetching mark function of the cache and determining whether that function operates normally.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application or in the related art, the drawings needed in the description of the embodiments or the related art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present application, and that those skilled in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a block diagram of a graphics processor cache in the related art;
FIG. 2 is a flowchart of a specific embodiment of a cache data re-fetching mark verification method according to the present application;
FIG. 3 is a flowchart of step S100 of the cache data re-fetching mark verification method according to a specific embodiment of the present application;
FIG. 4 is a flowchart of step S300 of the cache data re-fetching mark verification method according to a specific embodiment of the present application;
FIG. 5 is a flowchart of step S400 of the cache data re-fetching mark verification method according to a specific embodiment of the present application;
FIG. 6 is a flowchart of obtaining expected data in step S300 of the cache data re-fetching mark verification method according to a specific embodiment of the present application;
FIG. 7 is a flowchart of step S330 of the cache data re-fetching mark verification method according to a specific embodiment of the present application;
FIG. 8 is a flowchart of step S500 of the cache data re-fetching mark verification method according to a specific embodiment of the present application;
FIG. 9 is a block diagram of a specific embodiment of a cache data re-fetching mark verification apparatus according to the present application;
FIG. 10 is a block diagram of a specific embodiment of a cache data re-fetching mark verification apparatus according to the present application that includes a parameterization module;
FIG. 11 is a block diagram of a specific embodiment of a cache data re-fetching mark verification apparatus according to the present application that includes a request queue;
FIG. 12 is a block diagram of a specific embodiment of a cache data re-fetching mark verification apparatus according to the present application that includes an expected data queue;
FIG. 13 is a schematic structural diagram of a computer device suitable for implementing embodiments of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only some embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The cache is used as a data temporary storage module for data interaction between the processor and an external memory, and plays a crucial role in improving the data throughput rate of the processor, so that the cache is largely used in the processor. Illustratively, a Graphics Processing Unit (GPU) has the characteristics of high bandwidth and high parallelization, and the cache plays a crucial role in improving the data throughput rate of the GPU. For convenience of understanding, the following description uses the cache of the graphics processor as an example, and it should be understood that the present disclosure does not limit application scenarios of the cache data re-fetching mark verification method, apparatus, and system.
The present disclosure addresses the need to verify the cache data re-fetching mark. For example, when downstream data in the downstream module of the cache changes, the cache sets a re-fetching mark for the cache data corresponding to that downstream data, and when the cache receives a read request for the cache data with the re-fetching mark, the cache re-acquires the changed downstream data from the downstream module and returns it as read-back data. Currently, there is no verification scheme for the cache data re-fetching mark.
In the related art, as shown in fig. 1, a cache of a graphics processor includes a cache region for storing data downstream of the cache and an input interface for receiving an external request, the cache region includes memory regions such as cache line 0, cache line 1, cache line 2 … cache line n, and the like, and the input interface includes interfaces such as input interface 1, input interface 2 … input interface n, and the like. After receiving a data request input from outside, if the cache data stored in the cache is processed or the cache data stored in the cache is obtained, the cache can directly perform data processing or return the cache data. If the downstream data stored in the downstream module of the cache needs to be processed or the downstream module data needs to be obtained, the cache of the graphics processor further transmits a data request to the downstream module of the cache according to the input data request, so that the function of processing the downstream data stored in the downstream module of the cache or obtaining the downstream data is realized.
In a graphics processor, there may be dozens of types of cache data requests. For example, the cache may be provided with n ports for receiving external requests, including input port 1, input port 2 … input port n. Caches can therefore be divided into single-port input and multi-port input according to the number of input ports for data requests. Single-port input means that only one port sends data read-write requests to the cache; multi-port input means that a plurality of ports send data read-write requests to the cache at the same time. According to the type of input request, caches can be divided into read-only caches and read-write caches. A read-only cache only receives read requests; a read-write cache receives both read requests and write requests. A read-write cache can further be divided into in-order execution and out-of-order execution according to the request execution order. In-order execution means that the cache executes requests in the order in which they are received, that is, if the cache receives requests A -> B in succession, the internal execution order of the cache is also A -> B. Out-of-order execution means that the cache does not necessarily execute requests in the order in which they are received, that is, if the cache receives requests A -> B in succession, the internal execution order may be A -> B or B -> A.
The cache of the graphics processor in the disclosed embodiments may also need to support functional characteristics whose execution likewise needs to be verified for correctness. These functional characteristics may include the re-fetching mark function. In general, the cache data stored in the cache is at least part of the downstream data acquired from the downstream module, so that each piece of cache data in the cache corresponds to identical downstream data in the downstream module. When downstream data in the downstream module changes, the downstream module forms a re-fetching mark request and sends it to the cache, so that the cache sets a re-fetching mark on the cache data corresponding to the changed downstream data. Thus, when the cache receives a read request for cache data carrying the re-fetching mark, the cache re-acquires the changed downstream data from the downstream module in real time. When the cache is abnormal, for example when the setting of the re-fetching mark fails or the cache function is abnormal, the cache still returns the cache data stored in the cache. There is currently no verification scheme for the cache re-fetching mark.
In addition, in the related art, when verifying a graphics processor cache, a separate test environment is built for each cache type to be verified, and separate human resources are invested for each. Verifying different cache types therefore consumes a large amount of manpower and involves a large amount of repeated verification work, causing huge manpower and time overhead; as a result, the labor cost of cache verification is high, the verification time is long, and the efficiency is low.
According to one aspect of the application, the embodiment discloses a cache data re-fetching mark verification method. As shown in fig. 2, in this embodiment, the method includes:
S100: obtaining the read-back data returned by the cache based on a read request for reading target cache data with a re-fetching mark, and obtaining the cache data corresponding to the read request from a cache model as expected data. The re-fetching mark is a mark that the cache, in response to a re-fetching mark request sent by a downstream behavior level model, sets on the target cache data in the cache corresponding to the downstream data to be re-fetched updated by that model; the re-fetching mark is used for indicating that, when the cache receives a read request for the target cache data, the downstream data corresponding to the read request obtained from the downstream behavior level model is determined as the read-back data; and the cache data corresponding to the target cache data stored in the cache model is the downstream data to be re-fetched.
S200: checking the read-back data according to the expected data to determine whether the re-fetching mark was set successfully.
The cache data re-fetching mark verification method obtains read-back data returned by the cache based on a read request for reading target cache data with a re-fetching mark, obtains the cache data corresponding to the read request from a cache model as expected data, and checks the read-back data against the expected data to determine whether the re-fetching mark was set successfully. The re-fetching mark is a mark that the cache, in response to a re-fetching mark request sent by the downstream behavior level model, sets on the target cache data in the cache corresponding to the downstream data to be re-fetched updated by that model; the re-fetching mark is used for indicating that, when the cache receives a read request for the target cache data, the downstream data corresponding to the read request obtained from the downstream behavior level model is determined as the read-back data; and the cache data corresponding to the target cache data stored in the cache model is the downstream data to be re-fetched. In this way, the downstream data to be re-fetched is updated by the downstream behavior level model, which can simulate the functions of the downstream module of the cache, a re-fetching mark request is formed and sent to the cache, and the cache can determine the target cache data corresponding to the updated downstream data according to the received re-fetching mark request and set the re-fetching mark for that target cache data. Further, to verify whether the re-fetching mark of the target cache data was set successfully, a read request for the target cache data with the re-fetching mark is sent to the cache, so that the cache, after determining that the target cache data carries the re-fetching mark, acquires the updated downstream data from the downstream behavior level model in real time. Therefore, when the cache sets the re-fetching mark successfully, the read-back data it returns for that read request is the updated downstream data held in the downstream behavior level model; when the cache fails to set the re-fetching mark, it still returns the target cache data stored in the cache. Consequently, after the re-fetching mark is requested for the target cache data, the read-back data returned by the cache for the read request can be compared with the expected data of the read request to determine whether the cache set the re-fetching mark for the target cache data successfully, thereby verifying the re-fetching mark function of the cache and determining whether that function operates normally.
It should be noted that the downstream behavior level model in the present application may simulate a function of a downstream module of the cache, that is, the downstream behavior level model stores downstream data and a data storage address of the corresponding downstream data, and may return the corresponding downstream data to the cache based on a data request sent by the cache. In addition, in order to implement verification of the cache re-fetching mark, the downstream behavior level model of the present application may further modify the downstream data to update the downstream data stored in the downstream behavior level model to obtain the downstream data to be re-fetched, form a re-fetching mark request based on the updated downstream data to be re-fetched, and send the re-fetching mark request to the cache, so that the cache sets a re-fetching mark on target cache data corresponding to the downstream data to be re-fetched.
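For illustration only (this sketch is not part of the claimed verification environment), the downstream behavior level model described above could be modeled in Python roughly as follows; the class name, the fetch/update_to_default methods, and the representation of the re-fetching mark request as a plain dictionary are all assumptions, not the patent's interface.

```python
DEFAULT_VALUE = 0  # assumed initial/default value for downstream data


class DownstreamBehaviorLevelModel:
    """Stores downstream data per address and serves data requests from the cache."""

    def __init__(self, initial_data=None):
        self.data = dict(initial_data or {})  # address -> downstream data

    def fetch(self, address):
        """Return downstream data for a data request issued by the cache."""
        return self.data.get(address, DEFAULT_VALUE)

    def update_to_default(self, address):
        """Update the downstream data to be re-fetched and build a re-fetching mark request."""
        self.data[address] = DEFAULT_VALUE          # or, in the deletion embodiment: del self.data[address]
        return {"type": "refetch_mark", "address": address}


model = DownstreamBehaviorLevelModel({0x40: 0xA})
refetch_req = model.update_to_default(0x40)          # would be sent to the cache under test
print(refetch_req, model.fetch(0x40))
```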
In addition, the cache model in the present application stores cache data and corresponding data storage addresses, where the cache data of the cache model may include the cache data in the cache, and may also include downstream data stored in the downstream behavior level model. Illustratively, the cache model at least includes cache data corresponding to the target cache data, the cache data corresponding to the target cache data may be the target cache data before the refetching marker request sent by the downstream behavior level model, and the cache data may be downstream data to be refetched, which corresponds to the target cache data but is different from the target cache data, in the downstream behavior level model after the refetching marker request sent by the downstream behavior level model. The cache model may be used to obtain corresponding cache data by searching from the cache model according to the data storage address of the cache data.
In one or more optional embodiments, a write request for writing data into the cache may be acquired, the write request input into the cache is analyzed to obtain a data storage address to be written and cache data to be written, and the cache data in the cache and a corresponding data storage address are obtained in a form of intercepting and analyzing the write request input into the cache. In other embodiments, the cache model may also be updated by directly sending a request to the cache to obtain the cache data and the corresponding data storage address from the cache. In the practical application process, the cache data in the cache and the corresponding data storage address may also be stored in the cache model through manual input and the like, so as to be used for verifying the read-back data returned by the cache, and realize functional verification of the cache, which is not limited in the present application.
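A minimal sketch of the cache model as an address-to-data map updated from intercepted write requests is shown below; the method names and the interception mechanism are illustrative assumptions rather than a structure required by the present application.

```python
class CacheModel:
    """Reference model: data storage address -> expected cache data."""

    def __init__(self):
        self.entries = {}  # address -> cache data

    def on_write_request(self, address, data):
        """Update the model from an intercepted write request into the cache."""
        self.entries[address] = data

    def expected(self, address):
        """Return the cache data corresponding to a read request, used as expected data."""
        return self.entries.get(address)


cache_model = CacheModel()
cache_model.on_write_request(0x40, 0xA)   # intercepted write of data A to address 0x40
print(hex(cache_model.expected(0x40)))
```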
Illustratively, in the present application the cache, in response to a read request, determines whether a re-fetching mark exists on the cache data corresponding to the read request. If so, the downstream data corresponding to the read request obtained from the downstream behavior level model is determined as the read-back data. If not, a conventional hit determination may be performed: on a hit, the cache data corresponding to the read request is used as the read-back data; on a miss, a data request is sent to the downstream behavior level model based on the read request, and the read-back data is determined from the downstream data returned by the downstream behavior level model.
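As an illustrative reference for the read behavior just described (assuming a simple dictionary-backed cache and a set of refetch-marked addresses, neither of which is mandated by the present application), the hit/miss/re-fetch decision could be sketched as:

```python
def reference_read(address, cached, refetch_marked, downstream_fetch):
    """Expected read behavior: refetch-marked data is re-acquired downstream;
    otherwise a hit returns cached data and a miss fetches downstream."""
    if address in refetch_marked:                 # re-fetching mark present
        data = downstream_fetch(address)          # re-acquire downstream data in real time
        cached[address] = data                    # assumed: cached copy refreshed and mark
        refetch_marked.discard(address)           # cleared; the patent does not specify this
        return data
    if address in cached:                         # conventional hit
        return cached[address]
    data = downstream_fetch(address)              # miss: fetch from the downstream module
    cached[address] = data
    return data


cached, marked = {0x40: 0xA}, {0x40}
print(reference_read(0x40, cached, marked, lambda a: 0xB))   # returns the updated data B
```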
It should be noted that, in the present application, the expected data (that is, the cache data corresponding to the read request obtained from the cache model) may be obtained after the read-back data is obtained, in which case the corresponding read request, or the data storage address corresponding to the read request, is determined based on the read-back data. Alternatively, when a read request input into the cache is captured, the cache data corresponding to that read request may be obtained from the cache model as expected data in advance, and when the read-back data is later obtained, the corresponding read request and expected data are located based on the read-back data. The present application does not limit the way in which the cache data corresponding to the read-back data is obtained.
In an optional embodiment, the re-fetching mark request is obtained by the downstream behavior-level model modifying the downstream data to be re-fetched into a default value so as to update the downstream data, and is formed according to the updated downstream data to be re-fetched.
Specifically, it may be understood that, when performing the re-fetch mark verification on the cache, the downstream data to be re-fetched may be modified to a default value, so that the downstream data is no longer the same as the corresponding cache data in the cache. The default value may be any value different from the current target cache data in the cache, for example, the default value may be a set initial value, or a value that is random and different from the current target cache data in the cache, or a value that is different from the current target cache data in the cache and is updated in response to a downstream data modification request transmitted by an external host, which is not limited by the present disclosure.
Furthermore, when the cache receives a read request for the target cache data with the re-fetching mark, if the re-fetching mark was set successfully, the cache re-acquires the corresponding downstream data from the downstream behavior level model based on the re-fetching mark and returns it as the read-back data. The cache data in the cache is stored in the cache model, and when the re-fetching mark is verified, that cache data needs to be synchronously modified into the downstream data to be re-fetched. If the expected data corresponding to the read request obtained from the cache model differs from the read-back data, the read-back data returned for the read request is the cache data held in the cache rather than downstream data re-acquired from the downstream behavior level model based on the re-fetching mark, and the re-fetching mark was not set successfully.
In a preferred embodiment, the re-fetching mark request is obtained by the downstream behavior-level model deleting the downstream data to be re-fetched so as to update it, and is formed according to the updated downstream data to be re-fetched.
Specifically, the purpose of modifying the downstream data to be re-fetched into the default value can be achieved by directly deleting the downstream data to be re-fetched, for example, after the deletion operation is executed, the downstream data corresponding to the target data storage address in the downstream behavior-level model is directly updated to the initial value, so that the downstream data to be re-fetched of the downstream behavior-level model is updated more simply and conveniently.
In an optional embodiment, the cache model stores cache data in the cache and a corresponding data storage address. As shown in fig. 3, the method further includes, before obtaining the cache data corresponding to the read request from the cache model as the expected data:
S120: And determining a target data storage address of the cache data in the cache corresponding to the downstream data to be re-fetched according to the re-fetching mark request.
S130: and modifying the cache data corresponding to the target data storage address in the cache model into a default value.
The cache data corresponding to the target cache data in the cache model can be modified based on the downstream data to be re-fetched updated by the downstream behavior level model, so that, for the same data storage address (the address of the target cache data in the cache), the downstream behavior level model and the cache model hold the same value, and this value differs from the target cache data held at that address in the cache.
Specifically, it is understood that, in a general case, if the cache refetching tag fails, the cache may return the cached data in the cache without retrieving the updated downstream data from the downstream behavioral level model. However, if the cache is abnormally operated, the data returned by the cache may not be the cache data with the refetch mark any more. In order to ensure the accuracy of the verification of the re-fetching mark, in this optional embodiment, after the downstream data to be re-fetched is modified to the default value, the cache data corresponding to the target cache data in the cache stored in the cache model may be further modified to the default value, when the readback data is verified, the expected data obtained from the cache model is the default value consistent with the downstream data to be re-fetched, and if the setting of the cache re-fetching mark is successful, the readback data returned by the cache is also the default value. Therefore, if the read-back data and the expected data are the same, namely the read-back data and the expected data are both default values, the cache returns updated downstream data obtained from the downstream behavior level model again, the re-fetching mark is successfully set, and the cache is in a normal running state. On the contrary, if the read-back data is different from the expected data, the read-back data is not a default value, the updated downstream data returned by the cache is not obtained from the downstream behavior level model again, the setting of the refetch flag is failed, and the running state of the cache is abnormal.
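A minimal sketch of steps S120 and S130 under the assumptions used above (the cache model as a plain dictionary and the re-fetching mark request carrying a target address) might be:

```python
def sync_cache_model(cache_model_entries, refetch_req, default_value=0, delete=False):
    """S120/S130 sketch: find the target data storage address from the re-fetching
    mark request and modify (or delete) the corresponding cache-model entry."""
    address = refetch_req["address"]              # target data storage address
    if delete:
        cache_model_entries.pop(address, None)    # mirror a downstream deletion
    else:
        cache_model_entries[address] = default_value
    return address


entries = {0x40: 0xA}
sync_cache_model(entries, {"type": "refetch_mark", "address": 0x40})
print(entries)   # {64: 0}
```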
In a preferred embodiment, the downstream behavioral level model deletes downstream data to be refetched to update the downstream data to be refetched. The step S130 of modifying the cache data corresponding to the target data storage address in the cache model to a default value specifically includes:
S131: And deleting the cache data corresponding to the target data storage address in the cache model.
Specifically, it can be understood that, while the downstream data is modified to the default value by deleting the downstream data, the cache data corresponding to the target cache data in the cache model also needs to be deleted. The target data storage address in the corresponding cache model is determined through the downstream data to be re-fetched in the downstream behavior level model, and the cache data in the target data storage address is deleted, so that the purpose of synchronously deleting the cache data in the cache model can be realized.
In a preferred embodiment, as shown in fig. 4, the step S300 of checking the read-back data according to the expected data to determine whether the re-fetching mark is successfully set specifically includes:
S331: And comparing the read-back data with the expected data.
S332: and if the two are consistent, the setting of the re-fetching mark is successful.
Specifically, it may be understood that after the downstream data to be re-fetched is modified to the default value, if the re-fetching mark is set successfully, the read-back data returned by the cache for the read request reading the target cache data with the re-fetching mark is the default value obtained from the downstream behavior level model; and since the cache data correspondingly stored in the cache model is also modified to the default value, the default value corresponding to the read request can be obtained from the cache model as the expected data. For example, in a specific example, the downstream behavior level model stores downstream data A and the cache stores cache data A identical to the downstream data A. The downstream behavior level model updates the downstream data A to B, forms a re-fetching mark request for the downstream data A, and sends the re-fetching mark request to the cache; the cache determines, based on the re-fetching mark request, the downstream data A to be re-fetched and the cache data A on which a re-fetching mark is to be set, and sets a re-fetching mark for the cache data A. Meanwhile, the cache data A stored in the cache model is modified into cache data B, so that whether the re-fetching mark was set successfully can be judged according to the cache data B in the cache model.
When the cache receives a read request for reading cache data A, it identifies the re-fetching mark of cache data A, re-acquires the downstream data B corresponding to cache data A from the downstream behavior level model, and returns the downstream data B as the read-back data. If the read-back data is downstream data B and the corresponding cache data in the cache model is cache data B, the cache has obtained the downstream data from the downstream behavior level model in real time and returned it, and the re-fetching mark was set successfully. If the read-back data is cache data A, the cache did not re-acquire the updated downstream data B from the downstream behavior level model according to the re-fetching mark, and the re-fetching mark was not set successfully.
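The concrete data-A/data-B example above can be restated as a short illustrative check; the address 0x40 and the values 0xA/0xB are hypothetical stand-ins for data A and data B.

```python
DATA_A, DATA_B = 0xA, 0xB
ADDR = 0x40

cache_model = {ADDR: DATA_A}          # mirrors cache data A
downstream = {ADDR: DATA_A}

# The downstream behavior level model updates A -> B and a re-fetching mark is requested.
downstream[ADDR] = DATA_B
cache_model[ADDR] = DATA_B            # cache model modified synchronously

# Read-back data returned by the cache for the read request to ADDR.
read_back = DATA_B                    # DATA_A here would indicate a failed re-fetching mark
expected = cache_model[ADDR]

print("re-fetching mark set successfully" if read_back == expected
      else "re-fetching mark setting failed")
```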
In a preferred embodiment, the re-fetching marking request is obtained by the downstream behavior-level model receiving a downstream data modification request transmitted by an external host, determining downstream data to be re-fetched according to the downstream data modification request, updating the downstream data to be re-fetched, and obtaining the downstream data to be re-fetched according to the updated downstream data to be re-fetched.
Specifically, the downstream behavior level model may simulate a function of a cached downstream module, so that a downstream data modification request may be transmitted to the downstream behavior level model through an external host, the downstream behavior level model determines downstream data to be re-fetched that needs to be modified correspondingly after receiving the downstream data modification request, updates the downstream data to be re-fetched, and forms a re-fetch tagging request based on the modified downstream data.
In a preferred embodiment, as shown in fig. 5, the method further includes, before acquiring the read-back data returned by the cache based on the read request for reading the cache data with the refetch tag, S400:
S410: And carrying out parameterized extraction on the verification request input into the cache based on the parameter configuration of the cache to obtain verification parameters.
S420: and obtaining a universal verification request according to the verification parameters and a preset data structure.
S430: and determining whether the verification request is a read request for reading the cache data with the refetching mark according to the generalized verification request.
Specifically, it can be understood that, in the related art, different test environments need to be established for the cache verification of different cache types and functional characteristics to perform respective tests. In the preferred embodiment, a parameter configuration of the cache can be formed in advance according to the cache to be verified, and the parameter configuration of the cache can be used for performing parameterized extraction on verification requests of different cache types to obtain parameters of the same attribute in the verification requests of different cache types, so that the request type and other attributes of the verification requests can be identified conveniently.
The parameter configuration of the cache can be obtained according to the basic parameters of the cache to be verified. In an alternative embodiment, the parameter configuration of the cache may include basic parameters of the cache, such as input interface type, input interface number, input request address bit width, input request data bit width, cache line data bit width, and cache line initialization data. And extracting the verification parameters corresponding to the basic parameters from the verification request input into the cache according to the basic parameters configured by the parameters, thereby realizing the parameterized extraction process.
Furthermore, verification parameters are obtained through parameterization extraction, the extracted verification parameters are arranged in the corresponding preset data structures according to the preset data structures, and the purpose of converting different types of verification requests into universal verification requests is achieved. Therefore, after the verification requests of different cache types input into the cache are converted into the universal verification request, the request type of the converted universal verification request can be identified, and automatic cache verification is carried out. Therefore, the data re-fetching mark verification method can automatically identify the read request input into the cache. Furthermore, by setting a data structure, it can be further realized whether the cache data required to be read by the read request is the cache data with the re-fetching mark or not, so as to determine whether the verification process of the cache is the re-fetching mark verification or not, and adopt a corresponding verification process for the read-back data returned by the cache. Therefore, the verification method of the preferred embodiment can be used for the verification processes of different cache types such as read-only cache verification, read-write cache verification and other functional characteristics of the cache, different test environments do not need to be set up for different cache types, the labor cost of cache verification is greatly reduced, the verification time of cache verification is shortened, and the verification efficiency is improved.
The preset data structure can be set according to the verification parameters obtained in the parameterized extraction process. In an alternative embodiment, the unified data structure may include fields for a request type (trans_type), a request address (address), read-write data (data), a data mask (mask), a request label (id), and a user definition (user_define). The request type indicates whether the request is a read request or a write request; the request address identifies the address to be read or written; the read-write data field stores the write data or the read-back data; the data mask identifies whether the data corresponding to each mask bit is written into the cache. For example, the data carried by a write request may not all need to be written, and a mask can mark which data needs to be written into the cache; for 64-bit write data, an 8-bit mask is available, so that the data actually written by the write request can be determined for subsequent consistency checking. The request label means that each request has a unique label, marking the correspondence between read-back data and requests. The user-defined field allows the user to extend the data request structure in the verification of different cache types. For example, the user may define a valid bit of the write request at a user-defined position: if it is 1, the write request is invalid, will not be written into the cache, and is directly discarded; if it is 0, the write request is valid and is written into the cache normally. Of course, in practical applications, those skilled in the art may add user definitions according to actual requirements to adapt to various cache data verification scenarios, which is not limited in this application.
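As a sketch only, the generalized request data structure could be expressed as follows; the field names trans_type, address, data, mask, id and user_define follow the paragraph above, while the Python types, the enum, and the example values are assumptions.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional


class TransType(Enum):
    READ = 0
    WRITE = 1


@dataclass
class GenericRequest:
    """Generalized verification request built by parameterized extraction."""
    trans_type: TransType          # read or write
    address: int                   # request address
    data: Optional[int] = None     # write data / read-back data
    mask: Optional[int] = None     # per-byte write mask, e.g. 8 bits for 64-bit data
    id: Optional[int] = None       # unique request label
    user_define: dict = field(default_factory=dict)  # user-defined extension fields


# Hypothetical example: a 64-bit write where only the low 4 bytes are marked valid.
req = GenericRequest(TransType.WRITE, address=0x40, data=0x1122334455667788,
                     mask=0x0F, id=7, user_define={"valid": 0})
print(req.trans_type is TransType.WRITE, hex(req.address))
```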
In alternative embodiments, the authentication request entered into the cache may be obtained by an interceptor or by placing a listener in the cache. Of course, in other embodiments, the verification request input to the cache may be obtained in other manners, which is not limited in this application.
In a preferred embodiment, as shown in fig. 6, the method further includes a step of storing the read request to a request queue before obtaining the cache data corresponding to the read-back data from the cache model as expected data.
The step S300 of obtaining the cache data corresponding to the read request from the cache model as the expected data specifically includes:
S310: And acquiring a read request corresponding to the read-back data from the request queue according to the read-back data returned by the cache.
S320: and obtaining cache data corresponding to the read-back data from a cache model as expected data according to the read request.
In particular, it will be appreciated that in the preferred embodiment, a request queue may be pre-configured, which may be used to store read requests. The retrieved read request may be stored to the request queue. When the read-back data returned by the cache based on the read request is obtained from the cache, the read request corresponding to the read-back data can be obtained from the request queue, corresponding expected data, namely downstream data to be re-fetched corresponding to the read request, is obtained according to the read request, and the read-back data is verified according to the expected data.
In one or more embodiments, received read requests may be stored to the request queue in sequence, and meanwhile, read-back data returned by the cache is also returned in sequence. And sequentially acquiring the read requests from the request queue according to the received read-back data, namely the read requests corresponding to the current read-back data. In other embodiments, the request label may be set in the read request, the received read request may also be stored to the request queue out of order, and the read-back data returned by the cache needs to set the request label, so that the read request corresponding to the request label may be obtained from the request queue according to the request label in the read-back data.
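An illustrative sketch of such a request queue, covering both in-order matching and matching by request label, is shown below; the dictionary-based request representation is assumed.

```python
from collections import deque

request_queue = deque()                    # stores captured read requests

def on_read_request(req):
    """Store a captured read request in the request queue."""
    request_queue.append(req)

def match_read_back(read_back):
    """Pop the read request matching returned read-back data.
    In-order caches pop from the head; out-of-order caches match by request label (id)."""
    if "id" in read_back:
        for req in list(request_queue):
            if req.get("id") == read_back["id"]:
                request_queue.remove(req)
                return req
        return None
    return request_queue.popleft()

on_read_request({"address": 0x40, "id": 7})
print(match_read_back({"id": 7, "data": 0xB}))
```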
In an optional embodiment, the cache model stores cache data in the cache and a corresponding data storage address. As shown in fig. 7, the step S330 of acquiring corresponding expected data according to the read request specifically includes:
S331: And determining a data storage address in the cache corresponding to the read-back data according to the read request.
S332: and acquiring downstream data corresponding to the data storage address from the downstream behavior level model according to the data storage address.
S333: the downstream data is taken as the expected data corresponding to the read request.
Specifically, in an optional embodiment, the cache model stores cache data in the cache and a corresponding data storage address, and the cache data in the cache model needs to at least include cache data with a refetching flag in the cache. In other alternative embodiments, when the cache receives a write request, the write request input to the cache may be obtained, and the cache model may be updated according to the write request. Specifically, similarly, the verification request input into the cache may be parameterized and extracted, and the generalized verification request may be obtained by converting the preset data structure, and the generalized verification request may be identified to determine the request type of the verification request. If the request type is a write request, determining a data storage address written into the cache and cache data written correspondingly according to the write request, and storing the data storage address and the cache data written correspondingly into the cache model.
Thus, the cache model records the write operations performed on the cache by verification requests, as well as all write requests downstream of the cache. When a verification request is a read request, the read address in the verification request is determined; the cache model holds the data storage address and cache data written by a previous write request corresponding to that read address, so the cache data corresponding to the read address can be obtained from the cache model according to the data storage address corresponding to the read address and used as the expected data.
In a preferred embodiment, as shown in fig. 8, the method further includes S500, before checking the read-back data according to the expected data:
S510: Storing the expected data to an expected data queue.
S520: and acquiring expected data corresponding to the read request from the expected data queue.
In particular, it will be appreciated that in the preferred embodiment an expected data queue can be set up in advance to store expected data. Therefore, in one or more embodiments, when the read-back data is returned in the order in which the cache received the verification requests, the expected data can be acquired in that same order and stored in the expected data queue. When the read-back data is checked, the expected data is acquired from the expected data queue in the order of the read-back data, that is, the expected data corresponding to each piece of read-back data, and the read-back data can then be checked against the expected data to determine whether the re-fetching mark was set successfully. Setting up the expected data queue speeds up the acquisition of expected data, improves the efficiency of checking the read-back data, and further improves cache verification efficiency.
In other embodiments, if the request label is set in the verification request, the expected data obtained according to the verification request and the corresponding request label may be correspondingly set in the expected data queue. Furthermore, a request label needs to be set in the read-back data returned by the cache, and the expected data corresponding to the request label can be obtained from the expected data queue according to the request label in the read-back data for consistency check.
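A corresponding sketch of the expected data queue, again assuming an optional request label for out-of-order matching, might be:

```python
from collections import deque

expected_queue = deque()   # (request label, expected data) pairs

def push_expected(expected, req_id=None):
    """Store expected data, optionally tagged with its request label."""
    expected_queue.append((req_id, expected))

def pop_expected(req_id=None):
    """In-order flows pop from the head; labeled (out-of-order) flows match by request label."""
    if req_id is None:
        return expected_queue.popleft()[1]
    for item in list(expected_queue):
        if item[0] == req_id:
            expected_queue.remove(item)
            return item[1]
    return None

push_expected(0xB, req_id=7)
print(pop_expected(req_id=7) == 0xB)
```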
Based on the same principle, the embodiment also discloses a cache data re-fetching mark verification device. As shown in fig. 9, in the present embodiment, the apparatus includes a control module 11, a cache model 12, and a downstream behavior level model 13.
The downstream behavior level model 13 updates downstream data to be re-fetched to form a re-fetch flag request, and sends the re-fetch flag request to a cache so that the cache responds to the re-fetch flag request sent by the downstream behavior level model 13, and sets a re-fetch flag for target cache data corresponding to the downstream data to be re-fetched updated by the downstream behavior level model 13 in the cache, where the re-fetch flag is used to indicate that when the cache receives a read request for the target cache data, the downstream data corresponding to the read request acquired from the downstream behavior level model 13 is determined as read-back data.
The cache data stored in the cache model 12 and corresponding to the target cache data is the downstream data to be fetched again.
The control module 11 is configured to obtain readback data returned by the cache based on a read request for reading target cache data with a refetching flag, obtain cache data corresponding to the read request from a cache model as expected data, and check the readback data according to the expected data to determine whether the refetching flag is set successfully.
In a preferred embodiment, as shown in fig. 10, the apparatus further includes a parameterization module 10, configured to, before obtaining read-back data returned by the cache based on a read request for reading the cache data with the refetching flag, parameterize, based on a parameter configuration of the cache, a verification request input to the cache to obtain a verification parameter; obtaining a universal verification request according to the verification parameters and a preset data structure; and determining whether the verification request is a read request for reading the cache data with the refetching mark according to the generalized verification request.
In a preferred embodiment, as shown in fig. 11, the control module 11 is specifically configured to store the read request to a request queue 111 before obtaining expected data corresponding to the read-back data; and acquiring a read request corresponding to the read-back data from the request queue 111 according to the read-back data returned by the cache, and acquiring cache data corresponding to the read-back data from a cache model as expected data according to the read request.
In a preferred embodiment, the cache model 12 stores cache data in the cache and a corresponding data storage address. The control module is specifically configured to determine, according to the read request, a data storage address in the cache corresponding to the read-back data; acquiring downstream data corresponding to the data storage address from the downstream behavior level model according to the data storage address; the downstream data is taken as the expected data corresponding to the read request.
In a preferred embodiment, as shown in fig. 12, the control module 11 is specifically configured to store the expected data to an expected data queue 112; the expected data corresponding to the read request is retrieved from the expected data queue 112.
In an optional embodiment, the downstream behavior level model 13 is specifically configured to modify the downstream data to be re-fetched into a default value to update the downstream data stored in the downstream behavior level model, and obtain a re-fetching marking request according to the updated downstream data to be re-fetched.
In a preferred embodiment, the downstream behavior level model 13 is specifically configured to delete the downstream data to be re-fetched to modify the downstream data to be re-fetched into a default value, and obtain a re-fetch marking request according to the updated downstream data to be re-fetched.
In an optional embodiment, the cache model 12 stores cache data corresponding to target cache data in the cache and a corresponding data storage address; the control module 11 is specifically configured to determine, according to the refetching flag request, a target data storage address of cache data in the cache corresponding to downstream data to be refetched; and modifying the cache data corresponding to the target data storage address in the cache model 12 into a default value.
In a preferred embodiment, the downstream behavior level model 13 is specifically configured to delete the downstream data to be retrieved; the control module 11 is specifically configured to delete the cache data corresponding to the target data storage address in the cache model.
In a preferred embodiment, the control module 11 is specifically configured to compare the read-back data with the expected data; if the two are consistent, the re-fetching mark is set successfully.
In a preferred embodiment, the downstream behavior level model 13 is specifically configured to receive a downstream data modification request transmitted by an external host before updating the downstream data stored in the downstream behavior level model; determine the downstream data to be re-fetched according to the downstream data modification request; update the downstream data to be re-fetched; and obtain the re-fetching mark request according to the updated downstream data to be re-fetched.
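Building on the DownstreamModel sketch above, a host-driven trigger might be illustrated as follows; the modification-request format is an assumption.

```python
# Illustrative sketch only: a downstream data modification request from an
# external host selects the downstream data to re-fetch and triggers the
# re-fetching mark request.
def handle_host_modification(mod_request, downstream_model):
    address = mod_request["address"]                   # data selected by the host
    return downstream_model.mark_for_refetch(address)  # update and issue the mark request
```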
Since the principle by which the device solves the problem is similar to that of the method described above, the implementation of the device may refer to the implementation of the method and is not repeated here.
Based on the same principle, this embodiment also discloses a cache verification system. The cache verification system comprises a cache and the cache verification device according to the above embodiments.
Since the principle by which the system solves the problem is similar to that of the method described above, the implementation of the system may refer to the implementation of the method and is not repeated here.
The systems, devices, modules or units described in the above embodiments may be implemented by a computer chip or an entity, or by a computer program product with certain functions. A typical implementation device is a computer device, which may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
In a typical example, the computer device specifically comprises a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the method described above when executing the program.
Referring now to FIG. 13, shown is a schematic block diagram of a computer device 600 suitable for use in implementing embodiments of the present application.
As shown in fig. 13, the computer device 600 includes a central processing unit (CPU) 601 that can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage section 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data necessary for the operation of the device 600. The CPU 601, the ROM 602, and the RAM 603 are connected to one another via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input section 606 including a keyboard, a mouse, and the like; an output section 607 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as necessary, so that a computer program read out therefrom is installed into the storage section 608 as necessary.
In particular, according to embodiments of the application, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program comprising program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609 and/or installed from the removable medium 611.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may store information by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the various elements may be implemented in the same one or more pieces of software and/or hardware in the practice of the present application.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the system embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and reference may be made to the partial description of the method embodiment for relevant points.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (14)

1. A cache data re-fetching mark verification method is characterized by comprising the following steps:
obtaining read-back data returned by the cache based on a read request for reading target cache data with a re-fetching mark, and obtaining cache data corresponding to the read request from a cache model as expected data;
checking the read-back data according to the expected data to determine whether the re-fetching mark is set successfully,
wherein the re-fetching mark is a mark set by the cache, in response to a re-fetching mark request sent by a downstream behavior level model, on target cache data in the cache corresponding to downstream data to be re-fetched updated by the downstream behavior level model; the re-fetching mark is used for indicating that, when a read request for the target cache data is received, the cache determines the downstream data corresponding to the read request obtained from the downstream behavior level model as the read-back data; and the cache data corresponding to the target cache data stored in the cache model is the downstream data to be re-fetched.
2. The cache data re-fetching mark verification method according to claim 1, wherein the re-fetching mark request is obtained by the downstream behavior level model modifying the downstream data to be re-fetched to a default value to update the downstream data, and according to the updated downstream data to be re-fetched.
3. The cache data re-fetching mark verification method according to claim 2, wherein the re-fetching mark request is obtained by the downstream behavior level model deleting the downstream data to be re-fetched to update the downstream data to be re-fetched, and according to the updated downstream data to be re-fetched.
4. The cache data re-fetching mark verification method according to claim 1, wherein the cache model stores data storage addresses corresponding to the cache data;
before the obtaining cache data corresponding to the read request from a cache model as expected data, the method further comprises:
determining a target data storage address of cache data in the cache corresponding to downstream data to be re-fetched according to the re-fetching mark request;
and modifying the cache data corresponding to the target data storage address in the cache model to a default value.
5. The cache data re-fetching mark verification method according to claim 4, wherein the downstream behavior level model deletes the downstream data to be re-fetched to update the downstream data to be re-fetched;
the modifying the cache data corresponding to the target data storage address in the cache model to a default value specifically includes:
and deleting the cache data corresponding to the target data storage address in the cache model.
6. The method according to any one of claims 1 to 5, wherein the checking the read-back data according to the expected data to determine whether the re-fetching mark is set successfully comprises:
comparing the read-back data with the expected data;
if the two are consistent, the re-fetching mark is set successfully;
if not, the setting of the re-fetching mark fails.
7. The cache data re-fetching mark verification method according to claim 6, wherein the re-fetching mark request is obtained by the downstream behavior level model receiving a downstream data modification request transmitted by an external host, determining the downstream data to be re-fetched according to the downstream data modification request, updating the downstream data to be re-fetched, and according to the updated downstream data to be re-fetched.
8. The cache data re-fetching mark verification method according to claim 6, further comprising, before obtaining the read-back data returned by the cache based on a read request for reading the cache data with the re-fetching mark:
performing parameterized extraction on the verification request input to the cache based on the parameter configuration of the cache to obtain verification parameters;
obtaining a generalized verification request according to the verification parameters and a preset data structure;
and determining, according to the generalized verification request, whether the verification request is a read request for reading the cache data with the re-fetching mark.
9. The cache data re-fetching mark verification method according to claim 6, further comprising, before obtaining the cache data corresponding to the read-back data from the cache model as expected data:
storing the read request to a request queue;
the obtaining the cache data corresponding to the read request from the cache model as expected data includes:
obtaining a read request corresponding to the read-back data from the request queue according to the read-back data returned by the cache;
and obtaining cache data corresponding to the read-back data from a cache model as expected data according to the read request.
10. The method of claim 6, further comprising, prior to checking the read-back data against the expected data:
storing the expected data to an expected data queue;
and acquiring expected data corresponding to the read request from the expected data queue.
11. A cache data re-fetching mark verification device is characterized by comprising a downstream behavior level model, a cache model and a control module;
the downstream behavior level model is used for updating downstream data to be re-fetched to form a re-fetching mark request, and sending the re-fetching mark request to a cache, so that the cache, in response to the re-fetching mark request sent by the downstream behavior level model, sets a re-fetching mark for the target cache data in the cache corresponding to the downstream data to be re-fetched updated by the downstream behavior level model, wherein the re-fetching mark is used for indicating that, when the cache receives a read request for the target cache data, the downstream data corresponding to the read request acquired from the downstream behavior level model is determined as the read-back data;
the cache model is used for storing the cache data corresponding to the target cache data as the downstream data to be re-fetched;
the control module is used for acquiring read-back data returned by the cache based on a read request for reading target cache data with a refetching mark, acquiring cache data corresponding to the read request from a cache model as expected data, and checking the read-back data according to the expected data to determine whether the refetching mark is successfully set.
12. A cache verification system, comprising a cache and the cache data re-fetching mark verification device of claim 11.
13. A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor,
the processor, when executing the program, implements the method of any of claims 1-10.
14. A computer-readable medium, having stored thereon a computer program,
the program when executed by a processor implements the method of any one of claims 1 to 10.
CN202310010875.9A 2023-01-05 2023-01-05 Cache data re-fetching mark verification method, device and system Active CN115756998B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310010875.9A CN115756998B (en) 2023-01-05 2023-01-05 Cache data re-fetching mark verification method, device and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310010875.9A CN115756998B (en) 2023-01-05 2023-01-05 Cache data re-fetching mark verification method, device and system

Publications (2)

Publication Number Publication Date
CN115756998A CN115756998A (en) 2023-03-07
CN115756998B true CN115756998B (en) 2023-03-31

Family

ID=85348207

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310010875.9A Active CN115756998B (en) 2023-01-05 2023-01-05 Cache data re-fetching mark verification method, device and system

Country Status (1)

Country Link
CN (1) CN115756998B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012172245A1 (en) * 2011-06-17 2012-12-20 Morpho Secure transfer between non-volatile memory and volatile memory
CN105868127A (en) * 2016-03-23 2016-08-17 北京经纬恒润科技有限公司 Data storage method and device and data reading method and device
CN114153680A (en) * 2021-12-14 2022-03-08 广东赛昉科技有限公司 Verification method and system for second-level cache interface protocol
CN114860159A (en) * 2022-04-14 2022-08-05 深圳市正浩创新科技股份有限公司 Data access method and electronic equipment

Also Published As

Publication number Publication date
CN115756998A (en) 2023-03-07

Similar Documents

Publication Publication Date Title
CN111414389B (en) Data processing method and device, electronic equipment and storage medium
CN105516230B (en) A kind of data processing method and device
CN112765023A (en) Test case generation method and device
CN111026765B (en) Dynamic processing method, device, storage medium and apparatus for strictly balanced binary tree
CN113448862B (en) Software version testing method and device and computer equipment
CN113704110A (en) Automatic testing method and device for user interface
CN110990346A (en) File data processing method, device, equipment and storage medium based on block chain
CN110222046B (en) List data processing method, device, server and storage medium
CN106990974B (en) APP updating method and device and electronic equipment
CN110908907A (en) Web page testing method, device, equipment and storage medium
CN106156291A (en) The caching method of static resource and system thereof based on Localstroage
CN116627331B (en) Cache verification device, method and system
CN115756998B (en) Cache data re-fetching mark verification method, device and system
CN110119388B (en) File reading and writing method, device, system, equipment and computer readable storage medium
CN112650689A (en) Test method, test device, electronic equipment and storage medium
CN111159040A (en) Test data generation method, device, equipment and storage medium
EP3264254B1 (en) System and method for a simulation of a block storage system on an object storage system
CN115826875B (en) Cache data invalidation verification method, device and system
CN113064895B (en) Incremental updating method, device and system for map
CN111737090B (en) Log simulation method and device, computer equipment and storage medium
CN114637672A (en) Automatic data testing method and device, computer equipment and storage medium
CN110266610B (en) Traffic identification method and device and electronic equipment
CN107885839B (en) Method and device for reading information in Word file
CN108694219B (en) Data processing method and device
CN110750569A (en) Data extraction method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant