CN118035022B - Cache verification method, device, equipment, medium and program product - Google Patents


Info

Publication number
CN118035022B
CN118035022B
Authority
CN
China
Prior art keywords: cache line, request, cache, access, hit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410446204.1A
Other languages
Chinese (zh)
Other versions
CN118035022A (en)
Inventor
Request not to publish name
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Bi Ren Technology Co ltd
Beijing Bilin Technology Development Co ltd
Original Assignee
Shanghai Bi Ren Technology Co ltd
Beijing Bilin Technology Development Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Bi Ren Technology Co ltd, Beijing Bilin Technology Development Co ltd filed Critical Shanghai Bi Ren Technology Co ltd
Priority to CN202410446204.1A priority Critical patent/CN118035022B/en
Publication of CN118035022A publication Critical patent/CN118035022A/en
Application granted
Publication of CN118035022B publication Critical patent/CN118035022B/en

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The embodiments of the present application provide a cache verification method, a device, equipment, a medium and a program product, which are used for improving the accuracy of the functional verification result of the hit-test module in a cache. The method comprises the following steps: constructing an SV reference model; inputting access requests to a first RTL design and to the SV reference model, respectively; and comparing, one by one, the first execution results output by the first RTL design with the second execution results output by the SV reference model, to obtain a verification result of the hit-test module. Because the sequential logic corresponding to the access processing function implemented by the SV reference model is consistent with the sequential logic corresponding to the access processing function implemented by the first RTL design, the second execution result output by the SV reference model can be aligned in timing with the first execution result output by the first RTL design, so the accuracy of the functional verification result of the hit-test module in the cache can be improved.

Description

Cache verification method, device, equipment, medium and program product
Technical Field
The embodiments of the present application relate to the technical field of chip verification, and in particular to a cache verification method, a device, equipment, a medium and a program product.
Background
In the chip design process, the chip design needs to be verified to ensure its correctness, so that the manufactured chip meets the design goals and the expected functions. For a chip design that integrates a cache, verifying the cache is an important part of the design process.
When verifying a cache, the hit-test module in the cache usually needs to be verified. In the related art, a C model is used, which can only treat the entire cache as a black box: it cannot simulate key modules such as the hit-test module, and it cannot guarantee, from the perspective of timing consistency, that the output of the C model is consistent with the output of the hit-test module. As a result, accurate comparison of the outputs of key modules cannot be achieved, and the correctness of the cache cannot be accurately verified.
Disclosure of Invention
The embodiments of the present application provide a cache verification method, a device, equipment, a medium and a program product, which are used for improving the accuracy of the functional verification result of the hit-test module in a cache.
In a first aspect, an embodiment of the present application provides a cache verification method, where the cache includes a hit-test module. The method includes: constructing an SV reference model, where the SV reference model is obtained by modeling, in the SV language, the sequential logic corresponding to the access processing function inside the hit-test module; inputting an access request to each of a first RTL (register-transfer level) design and the SV reference model, where the first RTL design is an RTL description of the sequential circuit corresponding to the access processing function inside the hit-test module; acquiring a first execution result output by the first RTL design and a second execution result output by the SV reference model, where the first execution result is obtained by the first RTL design processing the access request, and the second execution result is obtained by the SV reference model processing the access request; and comparing the first execution result with the second execution result one by one to obtain a verification result of the hit-test module.
With this method, the SV reference model implements the same access processing function as the first RTL design, and the sequential logic it implements is consistent with the sequential logic implemented by the first RTL design. Therefore, when the same access request is input to both, the second execution result output by the SV reference model can be aligned in timing with the first execution result output by the first RTL design. Verifying the first execution result against the second execution result thus improves the accuracy of the functional verification result of the hit-test module in the cache, so that the correctness of the cache can be accurately verified.
In a second aspect, an embodiment of the present application provides a verification apparatus for a cache, where the cache includes a hit test module, the verification apparatus includes:
a construction unit, configured to construct an SV reference model, where the SV reference model is obtained by modeling, in the SV language, the sequential logic corresponding to the access processing function inside the hit-test module;
an input unit, configured to input an access request to each of a first RTL design and the SV reference model, where the first RTL design is an RTL description of the sequential circuit corresponding to the access processing function inside the hit-test module;
a monitoring unit, configured to acquire a first execution result output by the first RTL design and a second execution result output by the SV reference model, and send the first execution result and the second execution result to a verification unit, where the first execution result is obtained by the first RTL design processing the access request, and the second execution result is obtained by the SV reference model processing the access request; and
the verification unit, configured to compare the first execution result with the second execution result one by one to obtain a verification result of the hit-test module.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor and a memory;
The memory is used for storing program instructions and data;
The processor is configured to invoke program instructions and data in the memory to perform the method provided in any of the foregoing aspects or any of the possible implementations of any of the foregoing aspects.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium comprising computer-executable instructions for causing a computer to perform a method as provided in any of the foregoing aspects or any of the possible implementations thereof when run on the computer.
In a fifth aspect, embodiments of the present application provide a computer program product storing a computer program comprising program instructions which, when executed by a computer, cause the computer to perform the method provided in any of the foregoing aspects or any of the possible implementations thereof.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present application, and that other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a schematic diagram of a multi-core processor chip according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an architecture according to an embodiment of the present application;
FIG. 3 is a flowchart illustrating a cache verification method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a verification system according to an embodiment of the present application;
FIG. 5 is a schematic diagram of another verification system according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a verification device according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following explains the related technical features related to the embodiments of the present application. It should be noted that these explanations are for easier understanding of the embodiments of the present application, and should not be construed as limiting the scope of protection claimed by the present application.
A processor chip generally includes at least one processor core and a multi-level memory structure. FIG. 1 illustrates a multi-core processor chip, including a processor core 1, a processor core 2, and a multi-level memory structure. The multi-level memory structure includes a first-level cache (L1-cache) and a second-level cache (L2-cache) private to processor core 1, an L1-cache and an L2-cache private to processor core 2, and a third-level cache (L3-cache) shared by processor core 1 and processor core 2. For any processor core, taking processor core 1 as an example, its private L1-cache is closer to the core than the L2-cache, and the L2-cache is closer to the core than the L3-cache. The next-level memory of the L1-cache is the L2-cache, the next-level memory of the L2-cache is the L3-cache, and the next-level memory of the L3-cache may be the main memory.
The processor chip may be, for example, a central processing unit (Central Processing Unit, CPU for short), a graphics processor (Graphics Processing Unit, GPU for short), a General-purpose graphics processor (General-Purpose Computing on Graphics Processing Units, GPGPU for short), or the like.
For example, the above multi-core processor chip is a GPU, where the processor core 1 and the processor core 2 included therein may be a stream processor cluster (Stream Processor Cluster, abbreviated as SPC).
FIG. 2 is a schematic diagram of an architecture to which embodiments of the present application are applicable, including a cache and its next-level memory.
The cache (cache) includes a hit-test module and a data store (dataram), and the dataram includes a plurality of cache lines (cacheline); for example, cache line 1 and cache line 2 are shown in FIG. 2. The application does not limit the number of cache lines that the dataram includes.
The hit-test module includes a module for implementing an access processing function; for convenience of description, this module is referred to as the access processing function module in the embodiments of the present application. The access processing function includes updating the state information corresponding to each cache line in the cache, a hit test function, and an access control function. The hit test function is mainly used to perform a cache-line hit test according to the address information indicated by an access request and the state information corresponding to each cache line in the cache, to obtain a test result: if some cache line in the cache has an address tag identical to the address information indicated by the access request, the test result is a hit result; if no cache line in the cache has an address tag identical to the address information indicated by the access request, the test result is a miss result. The access control function is used to generate a request to access the dataram, or to generate a request to access the next-level memory (memory).
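To make the division between the hit test function and the access control function concrete, the following is a minimal illustrative sketch (in Python rather than SV, and not taken from the patent; all names are assumptions): a hit requires a valid line whose address tag matches the request's address information, and the access control function then targets either the dataram or the next-level memory.

```python
# Illustrative sketch (not the patent's code) of the two functions the
# access processing function combines: a cache-line hit test followed by
# access control. Field and function names are hypothetical.

def hit_test(cache_lines, addr_tag):
    """Return the index of a hit cache line, or None on a miss."""
    for idx, line in enumerate(cache_lines):
        if line["valid"] and line["tag"] == addr_tag:
            return idx          # address tag matches a valid line -> hit
    return None                 # no valid line with this tag -> miss

def access_control(cache_lines, addr_tag):
    """On a hit, generate a request to access the dataram; on a miss,
    generate a request to access the next-level memory."""
    idx = hit_test(cache_lines, addr_tag)
    if idx is not None:
        return ("dataram", idx)
    return ("next_level_memory", addr_tag)

lines = [{"valid": True, "tag": 0x10}, {"valid": False, "tag": 0x20}]
print(access_control(lines, 0x10))  # tag 0x10 hits line 0 -> dataram request
print(access_control(lines, 0x20))  # line 1 matches but is invalid -> miss
```

Note that a matching tag on an invalid line still counts as a miss, which matches the hit/miss definition in the text above.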
In the embodiments of the present application, the hit-test module may further include a first functional module connected to the input end of the access processing function module and a second functional module connected to the output end of the access processing function module. The first functional module may integrate one or more functions; for example, it includes, but is not limited to, an arbitration module, a first state machine, and the like. The second functional module may also integrate one or more functions, including, but not limited to, a second state machine. The embodiments of the present application do not limit the functions of the first functional module and the second functional module. As for the processing of an access request inside the hit-test module: after an access request is input to the hit-test module, it is processed in sequence by the first functional module, the access processing function module, and the second functional module, and the result is finally output from the hit-test module to other functional modules in the cache.
It should be noted that the cache in the embodiments of the present application may further include a computing module, where the computing module may execute a computing task and may store the execution result in the dataram. By way of example, the computing module may be a computing unit, such as an arithmetic logic unit (Arithmetic and Logic Unit, ALU for short); for another example, the computing module may include at least one computing array, and each computing array may include at least one computing unit. The cache may also include other functional modules, which are not described here again.
The cache in the embodiments of the present application may be any one of the L1-cache, the L2-cache, and the L3-cache shown in FIG. 1. For example, if the cache is the L1-cache, the next-level memory (memory) of the cache is the L2-cache; if the cache is the L2-cache, the next-level memory of the cache is the L3-cache; and if the cache is the L3-cache, the next-level memory of the cache is the main memory.
In the process of designing a chip that integrates a cache, in order for the cache to meet the design goals and the expected functions, the cache needs to undergo functional verification. The most critical functional module of the cache is the hit-test module, so the correctness of the hit-test module in the cache needs to be ensured. In the related art, a C model is used, which can only treat the entire cache as a black box: it cannot simulate key modules such as the hit-test module, and it cannot guarantee, from the perspective of timing consistency, that the output of the C model is consistent with the output of the hit-test module. As a result, accurate comparison of the outputs of some key modules cannot be achieved, and the correctness of the cache cannot be accurately verified.
In view of this, the present application provides a cache verification method based on the architecture shown in FIG. 2. The cache verification method may be performed by an electronic device, on which a verification platform, for example, a Universal Verification Methodology (UVM) verification platform, may be deployed. The cache includes a hit-test module. As shown in FIG. 3, the cache verification method provided by the present application includes the following steps:
step 301, an SV reference model is constructed, which is obtained by modeling based on sequential logic corresponding to an access processing function in an SV language simulation hit test module. Wherein SV is an abbreviation of System Verilog, which is a hardware description language.
In one possible implementation, a first RTL (register-transfer level) design may be constructed based on the sequential circuit corresponding to the access processing function inside the hit-test module; specifically, the first RTL design is an RTL description of that sequential circuit. In addition, an SV reference model needs to be constructed to verify the first RTL design, so as to verify the access processing function inside the hit-test module.
Specifically, the SV reference model, obtained by modeling in the SV language the sequential logic corresponding to the access processing function inside the hit-test module, can implement the access processing function, and the sequential logic it implements is consistent with the sequential logic corresponding to the access processing function in the first RTL design. The output of the SV reference model is therefore aligned in timing with the output of the first RTL design, which enables accurate comparison.
In the present application, the SV reference model can be constructed on the verification platform, and the verification process of the first RTL design is carried out through the verification platform. The verification system is described below with reference to FIG. 4.
As shown in FIG. 4, the verification system includes a UVM verification platform and a first RTL design, where the UVM verification platform includes a driver, a monitor 1, a monitor 2, an SV reference model, and a comparator (scoreboard).
The driver is configured to apply different stimuli to the first RTL design to be verified. In the embodiments of the present application, the stimuli applied to the first RTL design may be access requests, whose types include, for example, a read request, a write request, a flush request, a calculation request, and the like. After the first RTL design receives an input access request, it generates a first execution result that needs to be verified.
Monitor 1 is configured to monitor the input end of the first RTL design and send the monitored input content (i.e., the access request) to the SV reference model. This ensures that the input content of the SV reference model is consistent with that of the first RTL design, and that the input timing of the access request to the SV reference model is aligned with the input timing of the access request to the first RTL design.
Monitor 2 is configured to monitor the first execution result output by the first RTL design and send the monitored first execution result to the comparator.
The SV reference model processes the same input content (i.e., the access request) as the first RTL design and sends the resulting second execution result to the comparator.
The comparator is configured to compare the received second execution result output by the SV reference model with the first execution result output by the first RTL design, so as to determine the verification result of the first RTL design according to the comparison result.
In the embodiments of the present application, the SV reference model implements the same access processing function as the first RTL design, and the sequential logic it implements is consistent with the sequential logic implemented by the first RTL design. Therefore, given consistent input, the execution results output by the SV reference model and the first RTL design can be compared, so as to verify whether the first execution result output by the first RTL design meets expectations.
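As a rough illustration of the dataflow just described, the following Python sketch (hypothetical, and not the patent's UVM code) models the driver/monitor/scoreboard loop of FIG. 4: the same request reaches both the design under verification (the first RTL design) and the reference model, and the scoreboard records any mismatch between their outputs.

```python
# Hypothetical model of the FIG. 4 flow: the driver stimulates the first
# RTL design, monitor 1 forwards the same request to the reference model,
# monitor 2 captures the design's output, and the scoreboard compares the
# two results one by one. All names here are illustrative.

def run_verification(requests, dut, ref_model):
    mismatches = []
    for req in requests:                 # driver: apply each stimulus
        dut_result = dut(req)            # monitor 2: first RTL design output
        ref_result = ref_model(req)      # monitor 1 -> SV reference model output
        if dut_result != ref_result:     # scoreboard: one-by-one comparison
            mismatches.append((req, dut_result, ref_result))
    return mismatches                    # empty list -> verification passed

# Toy stand-ins for the design under verification and the reference model:
dut = lambda req: ("hit", req) if req % 2 == 0 else ("miss", req)
ref = lambda req: ("hit", req) if req % 2 == 0 else ("miss", req)
print(run_verification([0, 1, 2, 3], dut, ref))  # [] -> all results match
```

An empty mismatch list corresponds to the "verification passed" verdict described in step 304 below, and a non-empty one to "verification not passed".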
Step 302, an access request is input to a first RTL design and an SV reference model, respectively.
In connection with the verification system shown in FIG. 4, an access request is input to the first RTL design by the driver, and when the access request is monitored at the input of the first RTL design, monitor 1 sends the access request to the SV reference model.
Step 303, obtaining a first execution result output by the first RTL design and a second execution result output by the SV reference model.
The first execution result is obtained by processing the access request through the first RTL design, and the second execution result is obtained by processing the access request through the SV reference model.
With reference to the verification system shown in FIG. 4, monitor 2 monitors the output end of the first RTL design; when monitor 2 detects the first execution result output by the first RTL design, it sends the first execution result to the comparator (scoreboard). Correspondingly, the second execution result output by the SV reference model is sent directly to the scoreboard.
Step 304, the first execution result is compared with the second execution result one by one to obtain a verification result of the hit-test module.
With reference to the verification system shown in FIG. 4, the scoreboard may compare the first execution result with the second execution result one by one to obtain a verification result of the hit-test module.
Specifically, if the first execution result is consistent with the second execution result, it is determined that the access processing function of the hit-test module passes verification; if the first execution result is inconsistent with the second execution result, it is determined that the access processing function of the hit-test module does not pass verification.
In the embodiments of the present application, the SV reference model implements the same access processing function as the first RTL design, and the sequential logic it implements is consistent with the sequential logic implemented by the first RTL design. When the same access request is input to both, the second execution result output by the SV reference model can be aligned in timing with the first execution result output by the first RTL design. Verifying the first execution result against the second execution result thus improves the accuracy of the functional verification result of the hit-test module in the cache, so that the correctness of the cache can be accurately verified.
Based on the above embodiment, the second execution result in step 303 is obtained by the SV reference model processing the access request; various possible implementations are described below. It should be understood that the first execution result in step 303 is obtained by the first RTL design processing the access request, and its specific implementation may refer to the implementations described for the second execution result, which are not detailed again in the present disclosure.
In the first embodiment, a cache-line hit test may be performed according to the address information indicated by the access request and the state information, maintained by the SV reference model, corresponding to each cache line in the cache, to obtain a first test result; then, at least one access processing operation corresponding to the access request is generated based on the first test result. The second execution result may include the at least one access processing operation.
In the embodiments of the present application, the state information corresponding to each cache line in the cache includes, but is not limited to, the following: an address tag (tag), a valid flag, a dirty flag, whether the line is still being read, whether replacement is allowed, replacement information, and the like.
In the embodiments of the present application, the cache-line hit test may be performed by comparing the address information carried by the access request with the address tags corresponding to the cache lines. If some cache line has an address tag consistent with the address information carried by the access request, and the valid indication corresponding to that cache line is valid, the first test result is a hit result, and that cache line is the hit cache line. If none of the address tags corresponding to the cache lines is consistent with the address information carried by the access request, or if a cache line has a consistent address tag but its valid indication is invalid, the first test result is a miss result.
In some scenarios, if some cache line has an address tag consistent with the address information carried by the access request and a valid indication that is valid, but the data corresponding to that address tag is still being read, the first test result may be determined as HOM (i.e., hit on miss).
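The three possible test results described above (hit, miss, and hit-on-miss) can be sketched as follows. This is an illustrative Python model, not the patent's implementation; the `data_in_flight` field is an assumed name standing in for "the data corresponding to the address tag is still being read".

```python
# Illustrative three-way test result: hit / miss / hit-on-miss (HOM).
# A HOM arises when a valid line's tag matches but its data is still
# being fetched from the next-level memory. Field names are assumptions.

def classify(cache_lines, addr_tag):
    for line in cache_lines:
        if line["valid"] and line["tag"] == addr_tag:
            if line.get("data_in_flight"):
                return "hit-on-miss"   # tag matched, data still being read
            return "hit"
    return "miss"                      # no valid line carries this tag

lines = [
    {"tag": 0xA, "valid": True},
    {"tag": 0xB, "valid": True, "data_in_flight": True},
    {"tag": 0xC, "valid": False},
]
print(classify(lines, 0xA))  # hit
print(classify(lines, 0xB))  # hit-on-miss
print(classify(lines, 0xC))  # miss: the tag matches only an invalid line
```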
Wherein the at least one access processing operation may include, but is not limited to, at least one of the following (1) - (6):
(1) Updated state information corresponding to the hit cache line;
(2) A request to access the hit cache line;
(3) A cache line reassigned for the access request;
(4) Updated state information corresponding to the redistributed cache line;
(5) A request to access the reallocated cache line;
(6) A request to access a next level of memory of the cache.
Based on the first embodiment, the type of the access request may be a read request or a write request. Different access requests lead to different first test results, and correspondingly, the specific implementation of generating at least one access processing operation based on the first test result also differs. Several cases are described in detail below.
In the first case, the access request is used to read data corresponding to first address information, and the first test result is a hit result. Generating at least one access processing operation corresponding to the access request based on the first test result may be implemented as follows: update the state information corresponding to the hit cache line based on the hit result to obtain updated state information corresponding to the hit cache line, and generate a first read request, where the first read request is used to read the data corresponding to the first address information from the hit cache line. In this case, the at least one access processing operation includes the first read request. For example, if the content included in the first execution result is also the first read request, the first execution result is consistent with the second execution result; if the content included in the first execution result is not the first read request, the first execution result is inconsistent with the second execution result.
In some examples, the at least one access processing operation may further include the updated state information corresponding to the hit cache line; that is, the content included in the second execution result is the first read request and the updated state information corresponding to the hit cache line. If the content included in the first execution result is also the first read request and the updated state information corresponding to the hit cache line, the first execution result is consistent with the second execution result; otherwise, the first execution result is inconsistent with the second execution result.
For how to compare the first execution result with the second execution result item by item in each of the following cases, reference may be made to the example above; this is not described in detail again.
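A minimal sketch of the first case, under assumed field names (Python, illustrative only, not the patent's code): on a read hit, the reference model updates the hit line's state and emits the first read request targeting the hit cache line in the dataram.

```python
# Illustrative sketch of the read-hit case. "lru_age" is a hypothetical
# piece of per-line state updated on a hit; the real state information
# updated by the hit-test module is not specified here.

def handle_read_hit(line, addr):
    line["lru_age"] = 0                       # hypothetical state update on hit
    first_read_request = {"target": "dataram",
                          "op": "read",
                          "line": line["index"],
                          "addr": addr}
    # The second execution result: the generated operation, optionally
    # together with the updated state of the hit cache line.
    return [first_read_request, dict(line)]

ops = handle_read_hit({"index": 3, "lru_age": 7}, addr=0x1000)
print(ops[0]["target"])  # the first read request targets the dataram
```

Each element of the returned list would then be compared one by one against the corresponding item of the first execution result.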
In the second case, the access request is used to read the data corresponding to the first address information, and the first test result is a miss result. Generating at least one access processing operation corresponding to the access request based on the first test result may be implemented as follows:
Reallocating the cache line for the missed access request based on the miss result;
If reallocation of the cache line fails, generate a second read request, where the second read request is used to read the data corresponding to the first address information from the next-level memory of the cache. In this case, the at least one access processing operation includes the second read request.
If reallocation of the cache line succeeds and the dirty-data flag corresponding to the reallocated cache line is invalid (an invalid dirty-data flag indicates that the data in the reallocated cache line is consistent with the data corresponding to the same address information in the next-level memory, so the data in the reallocated cache line does not need to be stored to the next-level memory), update the state information corresponding to the reallocated cache line to obtain updated state information, and generate a third read request and a first write request. The third read request is used to read the data corresponding to the first address information from the next-level memory of the cache, and the first write request is used to write the data corresponding to the first address information, read from the next-level memory, into the reallocated cache line. In this case, the at least one access processing operation includes the third read request and the first write request. In some examples, the at least one access processing operation may further include the updated state information corresponding to the reallocated cache line.
If reallocation of the cache line succeeds and the dirty-data flag corresponding to the reallocated cache line is valid (a valid dirty-data flag indicates that the data in the reallocated cache line is inconsistent with the data corresponding to the same address information in the next-level memory, so the data in the reallocated cache line needs to be stored to the next-level memory), update the state information corresponding to the reallocated cache line to obtain updated state information, and generate a fourth read request, a second write request, a third read request, and a first write request. The fourth read request is used to read the data from the reallocated cache line, and the second write request is used to write the data read from the reallocated cache line to the next-level memory of the cache. In this case, the at least one access processing operation includes the fourth read request, the second write request, the third read request, and the first write request. In some examples, the at least one access processing operation may further include the updated state information corresponding to the reallocated cache line.
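The branching in the second case can be summarized in a short sketch (Python, illustrative; the request names follow the text above, everything else is an assumption): reallocation failure yields only the second read request; a clean victim line yields the third read request and the first write request; a dirty victim additionally yields the fourth read request and the second write request for write-back.

```python
# Illustrative sketch of the read-miss case. "victim" is the reallocated
# cache line (None if reallocation failed); request names follow the text.

def handle_read_miss(victim, addr):
    ops = []
    if victim is None:                            # reallocation failed
        ops.append(("read_next_level", addr))     # second read request
        return ops
    if victim["dirty"]:                           # dirty victim: write back first
        ops.append(("read_line", victim["index"]))       # fourth read request
        ops.append(("write_next_level", victim["tag"]))  # second write request
    victim.update(tag=addr, valid=True, dirty=False)     # updated state information
    ops.append(("read_next_level", addr))         # third read request
    ops.append(("write_line", victim["index"], addr))    # first write request
    return ops

print(handle_read_miss(None, 0x40))               # only the second read request
print(handle_read_miss({"index": 1, "tag": 0x20, "dirty": True}, 0x40))
```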
In the third case, the access request is used for writing the data block to be written into the second address information; the first test result is a hit result, and the generation of at least one access processing operation corresponding to the access request based on the first test result can be realized by the following ways: updating the state information corresponding to the hit cache line based on the hit result to obtain updated state information corresponding to the hit cache line, and generating a third write request, wherein the third write request is used for writing the data block to be written corresponding to the second address information into the hit cache line. In this case, the at least one access processing operation includes a third write request. In some examples, the at least one access processing operation may further include updated state information corresponding to the hit cache line.
In the fourth case, the access request is used for writing the data block to be written into the second address information; the first test result is a miss result, and the generation of at least one access processing operation corresponding to the access request based on the first test result can be achieved by:
Reallocating the cache line for the missed access request based on the miss result;
if the reallocation of the cache line fails, generating a fourth write request; the fourth writing request is used for writing a data block to be written corresponding to the second address information into a next-level memory of the cache; in this case, the at least one access processing operation includes a fourth write request.
If the reallocation of the cache line is successful, the data block to be written is entirely valid, and the dirty data mark corresponding to the reallocated cache line is invalid, updating the state information corresponding to the reallocated cache line to obtain updated state information corresponding to the reallocated cache line, and generating a fifth write request; the fifth write request is used for writing the data block to be written into the reallocated cache line; in this case, the at least one access processing operation includes the fifth write request. In some examples, the at least one access processing operation may further include the updated state information corresponding to the reallocated cache line.
If the reallocation of the cache line is successful, the data block to be written is entirely valid, and the dirty data mark corresponding to the reallocated cache line is valid, updating the state information corresponding to the reallocated cache line to obtain updated state information corresponding to the reallocated cache line, and generating a fifth read request, a sixth write request, and the fifth write request; wherein the fifth read request is used for reading data from the reallocated cache line, and the sixth write request is used for writing the data read from the reallocated cache line into the next-level memory of the cache; in this case, the at least one access processing operation includes the fifth read request, the sixth write request, and the fifth write request. In some examples, the at least one access processing operation may further include the updated state information corresponding to the reallocated cache line.
If the reallocation of the cache line is successful, the data block to be written is partially valid, and the dirty data mark corresponding to the reallocated cache line is invalid, updating the state information corresponding to the reallocated cache line to obtain updated state information corresponding to the reallocated cache line, and generating a sixth read request and a seventh write request; the sixth read request is used for reading data corresponding to the second address information from a next-level memory of the cache; the seventh write request is used for splicing the data read from the next-level memory of the cache and corresponding to the second address information with the data block to be written and then writing the spliced data into the redistributed cache line; in this case, the at least one access processing operation includes a sixth read request and a seventh write request. In some examples, the at least one access processing operation may further include updated state information corresponding to the reallocated cache lines.
If the reallocation of the cache line is successful, the data block to be written is partially valid, and the dirty data flag corresponding to the reallocated cache line is valid, updating the state information corresponding to the reallocated cache line to obtain updated state information corresponding to the reallocated cache line, and generating a fifth read request, a sixth write request, a sixth read request and a seventh write request. In this case, the at least one access processing operation includes a fifth read request, a sixth write request, a sixth read request, and a seventh write request. In some examples, the at least one access processing operation may further include updated state information corresponding to the reallocated cache lines.
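The write-path cases above (hit, reallocation failure, and the four miss sub-cases distinguished by block validity and the dirty data mark) can likewise be sketched in Python. Again a hedged illustration: the real model is SystemVerilog, and the operation labels here are invented stand-ins for the numbered requests in the text.

```python
def process_write(hit, realloc_ok, fully_valid, dirty):
    """Operations the reference model generates for a write access request.

    On a miss, `realloc_ok` says whether reallocation succeeded,
    `fully_valid` whether the data block to be written is entirely valid,
    and `dirty` is the reallocated line's dirty data mark.
    """
    if hit:
        # Hit: the third write request writes the block into the hit line.
        return ["write3_hit_line"]
    if not realloc_ok:
        # Reallocation failed: the fourth write request goes to next-level memory.
        return ["write4_next_level"]
    ops = []
    if dirty:
        # Valid dirty mark: evict the victim line first
        # (fifth read request + sixth write request).
        ops += ["read5_realloc_line", "write6_next_level"]
    if fully_valid:
        # Entirely valid block: the fifth write request fills the line directly.
        ops += ["write5_realloc_line"]
    else:
        # Partially valid block: fetch the line from next-level memory and
        # merge with the block (sixth read request + seventh write request).
        ops += ["read6_next_level", "write7_merge_line"]
    return ops
```

The dirty-victim write-back and the fill/merge choice compose independently, which is why the four miss sub-cases produce the four distinct operation lists enumerated in the description.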
Based on the first embodiment, the second execution result may include the first test result in addition to at least one access processing operation.
In the embodiment of the present application, the access request may also be a calculation request, and the processing procedure of the calculation request may refer to the processing procedure of the access request for writing the data block to be written into the second address information.
In the second embodiment, the access request is a flushing request; the second execution result is obtained by processing the access request by the SV reference model, and can be realized by the following steps:
If, according to the state information corresponding to each cache line in the cache maintained by the SV reference model, there exists a cache line whose dirty data mark is valid, the state information of each cache line with a valid dirty data mark is updated to obtain updated state information corresponding to that cache line, and a seventh read request and an eighth write request are generated; the seventh read request is used for reading the data in the cache line with the valid dirty data mark, and the eighth write request is used for writing the data read from the cache line with the valid dirty data mark into a next-level memory of the cache. The second execution result at least includes the seventh read request and the eighth write request corresponding to each cache line whose dirty data mark is valid. In some examples, the second execution result may further include the updated state information corresponding to each cache line whose dirty data mark is valid.
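A minimal sketch of this flush handling, assuming the model keeps a per-line dirty data mark (Python stand-in for the SystemVerilog model; names are illustrative):

```python
def process_flush(cache_lines):
    """For a flush request: every line whose dirty data mark is valid is
    read (seventh read request) and written back to next-level memory
    (eighth write request), and its state information is updated."""
    ops = []
    for index, line in enumerate(cache_lines):
        if line["dirty"]:
            ops.append(("read7_line", index))
            ops.append(("write8_next_level", index))
            line["dirty"] = False  # updated state information for this line
    return ops
```

After the flush, no line's dirty data mark remains valid, matching the intent that all dirty data has been stored to the next-level memory.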
In the above embodiment, the first RTL design is taken as the design to be tested, and the first RTL design can implement the access processing function inside the hit-test module as described by way of example, and the first RTL design is verified by the SV reference model, so as to implement verification of the access processing function inside the hit-test module.
In another possible implementation manner, a second RTL design may be constructed based on the timing circuit corresponding to the hit-test module, and the second RTL design is used as the design to be tested. The second RTL design implements all functions of the hit-test module, and the sub-design that implements the access processing function in the second RTL design is verified through the SV reference model. Since that sub-design is consistent with the function implemented by the first RTL design described above, the second RTL design can be understood as including the first RTL design, and verifying the access processing function in the second RTL design amounts to verifying the first RTL design inside the second RTL design.
Specifically, an access request may be input to a second RTL design and the input of the first RTL design monitored; when an access request is monitored at an input of a first RTL design, the monitored access request is input to an SV reference model. And then, acquiring a first execution result output by the first RTL design and a second execution result output by the SV reference model, and comparing the first execution result with the second execution result one by one to obtain a verification result of the hit test module.
In addition to the access processing function, other functions may be implemented in the hit-test module, and the second RTL design may then further include sub-designs corresponding to those other functions. Illustratively, taking the second RTL design including three sub-designs, namely the first RTL design, the third RTL design, and the fourth RTL design shown in fig. 5, as an example, the first RTL design may be verified by the verification system shown in fig. 5; the functions implemented by the third RTL design and the fourth RTL design are not limited by the present application.
The verification system shown in fig. 5 includes a driver, monitor1, monitor2, an SV reference model, and a scoreboard. The difference from fig. 4 is that the driver in fig. 5 is configured to apply different stimuli to the second RTL design; in the embodiment of the present application, the stimulus applied to the second RTL design may be an access request, and the type of the access request is, for example, a read request, a write request, a flush request, or a calculation request.
After the second RTL design receives the input access request, the access request is processed by the second RTL design before entering the first RTL design. Therefore, when verifying the first RTL design in the second RTL design, monitor1 is required to monitor the input end of the first RTL design, monitor2 monitors the output end of the first RTL design, so as to ensure that the SV reference model is consistent with the input content of the first RTL design, and the input timing of the access request to the SV reference model and the input timing of the first RTL design can be aligned. Furthermore, monitor2 may monitor the first execution result output by the first RTL design, and send the monitored first execution result to the comparator. The relevant implementation of the comparator and the SV reference model can be seen from the description of fig. 4, and will not be repeated here.
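The role of monitor1 can be sketched as a component that forwards every request observed at the first RTL design's input port to the reference model in the same cycle, which is what keeps the model's input timing aligned with the design under test. A simplified, hypothetical Python sketch (the real monitor is a testbench component; `ModelStub` and all names here are invented for illustration):

```python
class InputMonitor:
    """Watches the first RTL design's input port; any request seen there
    is forwarded to the reference model in the same cycle."""

    def __init__(self, model):
        self.model = model

    def sample(self, cycle, port_value):
        # `port_value` is None on cycles where no request is present.
        if port_value is not None:
            self.model.receive(cycle, port_value)


class ModelStub:
    """Stand-in for the SV reference model's input side."""

    def __init__(self):
        self.received = []

    def receive(self, cycle, request):
        self.received.append((cycle, request))
```

Sampling per cycle (rather than re-driving the original stimulus) is what guarantees the model sees exactly the requests, and the timing, that the first RTL design sees after the surrounding second RTL design has processed them.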
Based on the same technical concept, the embodiment of the application provides a verification device for a cache, wherein the cache comprises a hit test module. As shown in fig. 6, the authentication apparatus 600 includes:
The construction unit 601 is configured to construct an SV reference model, where the SV reference model is obtained by modeling based on a sequential logic corresponding to an access processing function in the SV language simulation hit test module;
an input unit 602, configured to input an access request to a first RTL design and the SV reference model, where the first RTL design is an RTL description of a timing circuit corresponding to an access processing function in the hit test module;
The monitoring unit 603 is configured to obtain a first execution result output by the first RTL design and a second execution result output by the SV reference model, and send the first execution result and the second execution result to the verification unit; the first execution result is obtained by processing the access request by the first RTL design, and the second execution result is obtained by processing the access request by the SV reference model;
And the verification unit 604 is configured to compare the first execution result with the second execution result one by one, so as to obtain a verification result of the hit test module.
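The one-by-one comparison performed by the verification unit can be sketched as follows (an illustrative Python stand-in; the actual comparator is a testbench component, and the return convention here is an assumption):

```python
def compare_one_by_one(first_results, second_results):
    """Compare the first and second execution results item by item; the
    hit-test module passes verification only if every pair matches."""
    if len(first_results) != len(second_results):
        return False, "result count mismatch"
    for i, (got, want) in enumerate(zip(first_results, second_results)):
        if got != want:
            return False, f"mismatch at result {i}: {got!r} != {want!r}"
    return True, "pass"
```

Because the SV reference model reproduces the sequential logic of the first RTL design, the two result streams can be compared positionally like this without any reordering.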
Optionally, the building unit 601 is further configured to build a second RTL design corresponding to the hit test module, where the second RTL design includes the first RTL design; an input unit 602, configured to input an access request to the second RTL design, and monitor an input terminal of the first RTL design; when the access request is monitored at the input of the first RTL design, the monitored access request is input to the SV reference model.
Optionally, the verification device 600 further comprises a processing unit 605 configured to: perform a cache line hit test according to the address information indicated by the access request and the state information corresponding to each cache line in the cache maintained by the SV reference model to obtain a first test result; and generate at least one access processing operation corresponding to the access request based on the first test result; the second execution result includes the at least one access processing operation.
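The cache line hit test itself can be sketched as a tag compare against the maintained per-line state information. The 64-byte line size and the flat (fully associative) lookup below are assumptions made only for illustration, not details from the application:

```python
def cache_line_hit_test(address, lines, line_size=64):
    """Compare the request address's tag against the state information of
    each valid cache line; returns the index of the hit line, or None on
    a miss. Line size and lookup structure are illustrative assumptions."""
    tag = address // line_size
    for index, line in enumerate(lines):
        if line["valid"] and line["tag"] == tag:
            return index  # first test result: hit
    return None           # first test result: miss
```

The returned hit/miss outcome is the "first test result" that drives which access processing operations the model generates next.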
Optionally, the at least one access processing operation includes at least one of:
updated state information corresponding to the hit cache line;
A request to access the hit cache line;
A cache line reassigned to the access request;
Updated state information corresponding to the redistributed cache line;
A request to access the reallocated cache line;
a request to access a next level of memory of the cache.
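The six kinds of access processing operations listed above can be carried in a small tagged record. The field names and kind/target vocabulary below are invented for illustration; the application does not prescribe a concrete encoding:

```python
from dataclasses import dataclass, field


@dataclass
class AccessOp:
    """One access processing operation emitted by the reference model."""
    kind: str       # e.g. "state_update", "read", "write", "realloc"
    target: str     # "hit_line", "realloc_line", or "next_level"
    payload: dict = field(default_factory=dict)


# For example, a hit on a read request might produce:
ops = [
    AccessOp("state_update", "hit_line", {"state": "updated"}),
    AccessOp("read", "hit_line"),
]
```

Keeping every operation in one uniform record type is what makes the later one-by-one comparison against the RTL design's outputs straightforward.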
Optionally, the second execution result further includes the first test result.
Optionally, the access request is used for reading data corresponding to the first address information, the first test result is a hit result, and the processing unit 605 is specifically configured to: updating state information corresponding to the hit cache line based on the hit result to obtain updated state information corresponding to the hit cache line, and generating a first read request, wherein the first read request is used for reading data corresponding to the first address information from the hit cache line.
Optionally, the access request is used for reading data corresponding to the first address information, the first test result is a miss result, and the processing unit 605 is specifically configured to:
Reallocating a cache line for the missed access request based on the miss result;
If the reallocation of the cache line fails, generating a second read request; the second read request is used for reading data corresponding to the first address information from a next-level memory of the cache; or alternatively
If the cache line is successfully redistributed and the dirty data mark corresponding to the redistributed cache line is invalid, updating the state information corresponding to the redistributed cache line to obtain updated state information corresponding to the redistributed cache line, and generating a third read request and a first write request; the third read request is used for reading data corresponding to the first address information from a next-level memory of the cache, and the first write request is used for writing the data corresponding to the first address information read from the next-level memory of the cache into the reallocated cache line; or alternatively
If the cache line reassignment is successful and the dirty data mark corresponding to the cache line reassignment is valid, updating the state information corresponding to the cache line reassignment to obtain updated state information corresponding to the cache line reassignment, and generating a fourth read request, a second write request, the third read request and the first write request; wherein the fourth read request is for reading data from the reallocated cache line; the second write request is for writing data read from the reallocated cache line into a next level of memory of the cache.
Optionally, the access request is used for writing the data block to be written to the second address information; the first test result is a hit result, and the processing unit 605 is specifically configured to: updating the state information corresponding to the hit cache line based on the hit result to obtain updated state information corresponding to the hit cache line, and generating a third write request, wherein the third write request is used for writing the data block to be written corresponding to the second address information into the hit cache line.
Optionally, the access request is used for writing the data block to be written to the second address information; the first test result is a miss result, and the processing unit 605 is specifically configured to: reallocating a cache line for the missed access request based on the miss result;
if the reallocation of the cache line fails, generating a fourth write request; the fourth writing request is used for writing a data block to be written corresponding to the second address information into a next-level memory of the cache; or alternatively
If the reallocation of the cache line is successful, the data block to be written is entirely valid, and the dirty data mark corresponding to the reallocated cache line is invalid, updating the state information corresponding to the reallocated cache line to obtain updated state information corresponding to the reallocated cache line, and generating a fifth write request; wherein the fifth write request is for writing the data block to be written into the reallocated cache line; or alternatively
If the reallocation of the cache line is successful, the data block to be written is entirely valid, and the dirty data mark corresponding to the reallocated cache line is valid, updating the state information corresponding to the reallocated cache line to obtain updated state information corresponding to the reallocated cache line, and generating a fifth read request, a sixth write request, and the fifth write request; wherein the fifth read request is for reading data from the reallocated cache line; the sixth write request is for writing data read from the reallocated cache line into a next level of memory of the cache; or alternatively
If the reallocation of the cache line is successful, the data block to be written is partially valid, and the dirty data mark corresponding to the reallocated cache line is invalid, updating the state information corresponding to the reallocated cache line to obtain updated state information corresponding to the reallocated cache line, and generating a sixth read request and a seventh write request; wherein the sixth read request is used for reading data corresponding to the second address information from a next-level memory of the cache; the seventh write request is used for writing the data corresponding to the second address information read from the next-level memory of the cache into the reallocated cache line after splicing the data block to be written; or alternatively
And if the reallocation of the cache line is successful, the data block to be written is partially valid, and the dirty data mark corresponding to the reallocated cache line is valid, updating the state information corresponding to the reallocated cache line to obtain updated state information corresponding to the reallocated cache line, and generating the fifth read request, the sixth write request, the sixth read request and the seventh write request.
Optionally, the access request is a flush request; the processing unit 605 is specifically configured to: if, according to the state information corresponding to each cache line in the cache maintained by the SV reference model, there exists a cache line whose dirty data mark is valid, update the state information of each cache line with a valid dirty data mark to obtain updated state information corresponding to that cache line, and generate a seventh read request and an eighth write request, wherein the seventh read request is used for reading the data in the cache line with the valid dirty data mark, and the eighth write request is used for writing the data read from the cache line with the valid dirty data mark into a next-level memory of the cache; the second execution result at least includes the seventh read request and the eighth write request corresponding to each cache line whose dirty data mark is valid.
It should be noted that, in the embodiment of the present application, the division of the units is schematic, which is merely a logic function division, and other division manners may be implemented in actual practice. The functional units in the embodiments of the present application may be integrated in one verification unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The methods described above may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, the methods described above may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded or executed on a computer, the processes or functions in accordance with embodiments of the present invention are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that contains one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium. The semiconductor medium may be a solid state drive (SSD).
In a simple embodiment, one skilled in the art will recognize that the cached verification means in an embodiment may be an electronic device, which may take the form shown in FIG. 7.
The electronic device 700 as shown in fig. 7 comprises at least one processor 701, a memory 702 and optionally a communication interface 703.
Memory 702 may be a volatile memory, such as a random access memory; the memory may also be a non-volatile memory, such as, but not limited to, a read-only memory, a flash memory, a hard disk drive (HDD), or a solid state drive (SSD); or the memory 702 may be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory 702 may also be a combination of the above.
The specific connection medium between the processor 701 and the memory 702 is not limited in the embodiment of the present application.
The processor 701 may be a GPU, and the processor 701 may also be another general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, an artificial intelligence chip, a system on chip, or the like. A general purpose processor may be a microprocessor or any conventional processor. In the electronic device of fig. 7, a separate data transceiver module, such as the communication interface 703, may also be provided for transceiving data; the processor 701 may communicate with other devices through the communication interface 703 by data transmission.
In one possible application scenario, the electronic device takes the form shown in fig. 7, and the processor 701 in fig. 7 may cause the electronic device to perform the method of any of the method embodiments described above by invoking computer-executable instructions stored in the memory 702.
Based on the same technical concept, the embodiments of the present application provide a computer-readable storage medium including computer-executable instructions for causing a computer to perform the method of any one of the above-described method embodiments.
Based on the same technical idea, an embodiment of the present application provides a computer program product, which stores a computer program comprising program instructions, which when executed by a computer, cause the computer to perform the method of any of the above-mentioned method embodiments.
It will be apparent to those skilled in the art that embodiments of the present application may be provided as a method, apparatus (device), system, chip, computer-readable storage medium, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects, all generally referred to herein as a "module" or "system".
The present application is described with reference to flowcharts and/or block diagrams of the method, apparatus (device), and system according to the present application. It should be understood that each flow in the flowcharts and/or each block in the block diagrams, and combinations of flows in the flowcharts and/or blocks in the block diagrams, can be implemented by computer program instructions.
These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process, such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although the invention has been described in connection with specific features and embodiments thereof, it will be apparent to those skilled in the art that various modifications and variations can be made without departing from the spirit and scope of the invention. Accordingly, the specification and drawings are merely exemplary illustrations of the invention as defined by the appended claims, and the invention is intended to cover any and all modifications, variations, combinations, or equivalents that fall within the scope of the appended claims and their equivalents.

Claims (13)

1. A method of cache validation, wherein the cache includes a hit test module, the method comprising:
Constructing an SV reference model, wherein the SV reference model is obtained by modeling based on sequential logic corresponding to an access processing function in the SV language simulation hit test module;
respectively inputting an access request to a first RTL design and the SV reference model, wherein the first RTL design is an RTL description of a time sequence circuit corresponding to an access processing function in the hit test module;
Acquiring a first execution result output by the first RTL design and a second execution result output by the SV reference model; the first execution result is obtained by processing the access request by the first RTL design, and the second execution result is obtained by processing the access request by the SV reference model, including: if the access request is a read request or a write request, performing cache line hit test according to address information indicated by the access request and state information corresponding to each cache line in the cache maintained by the SV reference model to obtain a first test result; generating at least one access processing operation corresponding to the access request based on the first test result; the second execution result includes the at least one access processing operation;
and comparing the first execution result with the second execution result one by one to obtain a verification result of the hit test module.
2. The method of claim 1, wherein prior to entering access requests to the first RTL design and the SV reference model, respectively, further comprising:
constructing a second RTL design corresponding to the hit test module, wherein the second RTL design comprises the first RTL design;
The inputting access requests to the first RTL design and the SV reference model, respectively, includes:
inputting an access request to the second RTL design and monitoring an input of the first RTL design;
when the access request is monitored at the input of the first RTL design, the monitored access request is input to the SV reference model.
3. The method of claim 1 or 2, wherein the at least one access processing operation comprises at least one of:
updated state information corresponding to the hit cache line;
A request to access the hit cache line;
A cache line reassigned to the access request;
Updated state information corresponding to the redistributed cache line;
A request to access the reallocated cache line;
a request to access a next level of memory of the cache.
4. The method of claim 1 or 2, wherein the second execution result further comprises the first test result.
5. The method according to claim 1 or 2, wherein the access request is used for reading data corresponding to first address information, the first test result is a hit result, and the generating at least one access processing operation corresponding to the access request based on the first test result includes:
Updating state information corresponding to the hit cache line based on the hit result to obtain updated state information corresponding to the hit cache line, and generating a first read request, wherein the first read request is used for reading data corresponding to the first address information from the hit cache line.
6. The method according to claim 1 or 2, wherein the access request is used for reading data corresponding to first address information, the first test result is a miss result, and the generating at least one access processing operation corresponding to the access request based on the first test result includes:
reallocating a cache line for the missed access request based on the miss result;
If the reallocation of the cache line fails, generating a second read request; the second read request is used for reading data corresponding to the first address information from a next-level memory of the cache; or alternatively
If the cache line is successfully redistributed and the dirty data mark corresponding to the redistributed cache line is invalid, updating the state information corresponding to the redistributed cache line to obtain updated state information corresponding to the redistributed cache line, and generating a third read request and a first write request; the third read request is used for reading data corresponding to the first address information from a next-level memory of the cache, and the first write request is used for writing the data corresponding to the first address information read from the next-level memory of the cache into the reallocated cache line; or alternatively
If the cache line reassignment is successful and the dirty data mark corresponding to the cache line reassignment is valid, updating the state information corresponding to the cache line reassignment to obtain updated state information corresponding to the cache line reassignment, and generating a fourth read request, a second write request, the third read request and the first write request; wherein the fourth read request is for reading data from the reallocated cache line; the second write request is for writing data read from the reallocated cache line into a next level of memory of the cache.
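The three read-miss branches of claim 6 (allocation failure, clean victim, dirty victim) can be sketched as below. All names are illustrative assumptions; the point is only the set of requests each branch generates.

```python
def handle_read_miss(alloc_ok: bool, victim_dirty: bool, addr: int) -> list:
    """Hedged sketch of the read-miss branches of claim 6."""
    ops = []
    if not alloc_ok:
        # Allocation failed: second read request goes straight to next-level memory.
        ops.append({"op": "read_next_level", "addr": addr})
    elif not victim_dirty:
        # Clean victim: fill the reallocated line from next-level memory.
        ops.append({"op": "read_next_level", "addr": addr})  # third read request
        ops.append({"op": "fill_line", "addr": addr})        # first write request
    else:
        # Dirty victim: write it back first, then fill as in the clean case.
        ops.append({"op": "read_line", "addr": addr})            # fourth read request
        ops.append({"op": "writeback_next_level", "addr": addr}) # second write request
        ops.append({"op": "read_next_level", "addr": addr})      # third read request
        ops.append({"op": "fill_line", "addr": addr})            # first write request
    return ops
```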
7. The method of claim 1 or 2, wherein the access request is used for writing a data block to be written to second address information; the first test result is a hit result, and the generating at least one access processing operation corresponding to the access request based on the first test result includes:
updating the state information corresponding to the hit cache line based on the hit result to obtain updated state information corresponding to the hit cache line, and generating a third write request, wherein the third write request is used for writing the data block to be written corresponding to the second address information into the hit cache line.
8. The method of claim 1 or 2, wherein the access request is used for writing a data block to be written to second address information; the first test result is a miss result, and the generating at least one access processing operation corresponding to the access request based on the first test result includes:
reallocating a cache line for the missed access request based on the miss result;
if the reallocation of the cache line fails, generating a fourth write request; the fourth write request is used for writing the data block to be written corresponding to the second address information into a next-level memory of the cache; or
if the cache line is successfully reallocated, the data block to be written is fully valid, and the dirty data flag corresponding to the reallocated cache line is invalid, updating the state information corresponding to the reallocated cache line to obtain updated state information corresponding to the reallocated cache line, and generating a fifth write request; wherein the fifth write request is used for writing the data block to be written into the reallocated cache line; or
if the cache line is successfully reallocated, the data block to be written is fully valid, and the dirty data flag corresponding to the reallocated cache line is valid, updating the state information corresponding to the reallocated cache line to obtain updated state information corresponding to the reallocated cache line, and generating a fifth read request, a sixth write request and the fifth write request; wherein the fifth read request is used for reading data from the reallocated cache line, and the sixth write request is used for writing the data read from the reallocated cache line into a next-level memory of the cache; or
if the reallocation of the cache line is successful, the data block to be written is partially valid, and the dirty data flag corresponding to the reallocated cache line is invalid, updating the state information corresponding to the reallocated cache line to obtain updated state information corresponding to the reallocated cache line, and generating a sixth read request and a seventh write request; the sixth read request is used for reading data corresponding to the second address information from a next-level memory of the cache, and the seventh write request is used for writing the data corresponding to the second address information read from the next-level memory of the cache, after splicing in the data block to be written, into the reallocated cache line; or
if the reallocation of the cache line is successful, the data block to be written is partially valid, and the dirty data flag corresponding to the reallocated cache line is valid, updating the state information corresponding to the reallocated cache line to obtain updated state information corresponding to the reallocated cache line, and generating the fifth read request, the sixth write request, the sixth read request and the seventh write request.
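The five write-miss branches of claim 8 reduce to three independent conditions: whether allocation succeeded, whether the data block to be written covers the whole line, and whether the victim line is dirty. A hedged sketch, with all names assumed for illustration:

```python
def handle_write_miss(alloc_ok: bool, fully_valid: bool, victim_dirty: bool, addr: int) -> list:
    """Hedged sketch of the write-miss branches of claim 8."""
    if not alloc_ok:
        # Allocation failed: fourth write request goes straight to next-level memory.
        return [{"op": "write_next_level", "addr": addr}]
    ops = []
    if victim_dirty:
        # Evict the dirty victim before reusing the reallocated line.
        ops.append({"op": "read_line", "addr": addr})            # fifth read request
        ops.append({"op": "writeback_next_level", "addr": addr}) # sixth write request
    if fully_valid:
        # Whole-line write: just write the block into the reallocated line.
        ops.append({"op": "write_line", "addr": addr})           # fifth write request
    else:
        # Partial write: fetch the line, splice in the block, write the merged line.
        ops.append({"op": "read_next_level", "addr": addr})         # sixth read request
        ops.append({"op": "merge_and_write_line", "addr": addr})    # seventh write request
    return ops
```

Note that the four successful-allocation branches of the claim fall out of the two `if` tests composing: clean+full gives one request, dirty+full three, clean+partial two, dirty+partial four.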
9. The method of claim 1 or 2, wherein the processing of the access request by the SV reference model to obtain the second execution result further comprises:
if the access request is a flush request and the state information corresponding to the cache lines in the cache maintained by the SV reference model indicates that cache lines with valid dirty data flags exist, updating, for each cache line whose dirty data flag is valid, its state information to obtain updated state information, and generating a seventh read request and an eighth write request, wherein the seventh read request is used for reading data from the cache line whose dirty data flag is valid, and the eighth write request is used for writing the data read from that cache line into a next-level memory of the cache; the second execution result includes at least the seventh read request and the eighth write request corresponding to each cache line whose dirty data flag is valid.
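The flush handling of claim 9 amounts to a write-back sweep over the dirty lines. A minimal sketch under assumed names (`handle_flush`, dictionary-shaped lines and requests are not from the patent):

```python
def handle_flush(lines: list) -> list:
    """For every cache line whose dirty flag is valid, emit a read of the
    line (seventh read request) and a write-back to next-level memory
    (eighth write request), clearing the dirty state."""
    ops = []
    for line in lines:
        if line.get("dirty"):
            line["dirty"] = False  # updated state information for the flushed line
            ops.append({"op": "read_line", "tag": line["tag"]})
            ops.append({"op": "writeback_next_level", "tag": line["tag"]})
    return ops

cache = [{"tag": 1, "dirty": True}, {"tag": 2, "dirty": False}]
flush_ops = handle_flush(cache)
```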
10. A cache validation apparatus, wherein the cache includes a hit test module, the validation apparatus comprising:
a construction unit, configured to construct an SV reference model, wherein the SV reference model is obtained by modeling, in the SV language, the sequential logic corresponding to the access processing function in the hit test module;
an input unit, configured to input access requests to a first RTL design and the SV reference model respectively, wherein the first RTL design is an RTL description of the sequential circuit corresponding to the access processing function in the hit test module;
a monitoring unit, configured to obtain a first execution result output by the first RTL design and a second execution result output by the SV reference model, and to send the first execution result and the second execution result to a verification unit; the first execution result is obtained by the first RTL design processing the access request, and the second execution result is obtained by the SV reference model processing the access request, which includes: if the access request is a read request or a write request, performing a cache line hit test according to address information indicated by the access request and state information corresponding to each cache line in the cache maintained by the SV reference model to obtain a first test result, and generating at least one access processing operation corresponding to the access request based on the first test result; the second execution result includes the at least one access processing operation; and
the verification unit, configured to compare the first execution result with the second execution result one by one to obtain a verification result of the hit test module.
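The verification unit's one-by-one comparison is essentially a scoreboard check. The sketch below is an assumed illustration of that idea, not the claimed implementation: any ordering or content mismatch between the RTL's results and the reference model's results fails the hit-test-module verification.

```python
def compare(rtl_results: list, ref_results: list) -> bool:
    """Compare the first execution results (RTL) with the second execution
    results (SV reference model) one by one, in order."""
    if len(rtl_results) != len(ref_results):
        return False
    return all(r == m for r, m in zip(rtl_results, ref_results))

ok = compare([{"op": "read_line", "tag": 1}], [{"op": "read_line", "tag": 1}])
```

Because the reference model mirrors the RTL's sequential logic, the two result streams are expected to align cycle-for-cycle, which is what makes this simple ordered comparison sufficient.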
11. An electronic device comprising a processor and a memory;
The memory is used for storing program instructions and data;
the processor is configured to invoke program instructions and data in the memory to perform the method of any of claims 1 to 9.
12. A computer readable storage medium comprising computer executable instructions which, when run on a computer, cause the computer to perform the method of any one of claims 1 to 9.
13. A computer program product, characterized in that the computer program product stores a computer program, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the method of any one of claims 1 to 9.
CN202410446204.1A 2024-04-12 2024-04-12 Cache verification method, device, equipment, medium and program product Active CN118035022B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410446204.1A CN118035022B (en) 2024-04-12 2024-04-12 Cache verification method, device, equipment, medium and program product


Publications (2)

Publication Number Publication Date
CN118035022A CN118035022A (en) 2024-05-14
CN118035022B true CN118035022B (en) 2024-07-09

Family

ID=90993607

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410446204.1A Active CN118035022B (en) 2024-04-12 2024-04-12 Cache verification method, device, equipment, medium and program product

Country Status (1)

Country Link
CN (1) CN118035022B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117234591A (en) * 2023-09-04 2023-12-15 上海合芯数字科技有限公司 Instruction verification method, system, equipment, medium and product

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11275582B2 (en) * 2017-01-06 2022-03-15 Montana Systems Inc. Event-driven design simulation
CN113297073B (en) * 2021-05-20 2022-07-29 山东云海国创云计算装备产业创新中心有限公司 Verification method, device and equipment of algorithm module in chip and readable storage medium
CN113486625B (en) * 2021-06-29 2022-05-06 海光信息技术股份有限公司 Chip verification method and verification system
CN115562982A (en) * 2022-09-28 2023-01-03 平头哥(上海)半导体技术有限公司 Reference model debugging method and device, electronic equipment and storage medium
CN117521568A (en) * 2023-11-28 2024-02-06 山东云海国创云计算装备产业创新中心有限公司 Reference model generation method, device, equipment and storage medium


Also Published As

Publication number Publication date
CN118035022A (en) 2024-05-14

Similar Documents

Publication Publication Date Title
CN115130402B (en) Cache verification method, system, electronic equipment and readable storage medium
RU2430409C2 (en) Method of measuring coverage in interconnection structural condition
US7000079B2 (en) Method and apparatus for verification of coherence for shared cache components in a system verification environment
US9501408B2 (en) Efficient validation of coherency between processor cores and accelerators in computer systems
CN113779912B (en) Chip verification system, method and device, electronic equipment and storage medium
US20210349815A1 (en) Automatically introducing register dependencies to tests
US11061821B2 (en) Method, system, and apparatus for stress testing memory translation tables
CN112597718A (en) Verification method, verification device and storage medium for integrated circuit design
CN116167310A (en) Method and device for verifying cache consistency of multi-core processor
CN114168200B (en) System and method for verifying memory access consistency of multi-core processor
CN117785292B (en) Verification method and verification device for cache consistency of multi-core processor system
CN117076330B (en) Access verification method, system, electronic equipment and readable storage medium
US9646252B2 (en) Template clauses based SAT techniques
CN118035022B (en) Cache verification method, device, equipment, medium and program product
US10007746B1 (en) Method and system for generalized next-state-directed constrained random simulation
US20090265534A1 (en) Fairness, Performance, and Livelock Assessment Using a Loop Manager With Comparative Parallel Looping
US9003364B2 (en) Overriding system attributes and function returns in a software subsystem
CN111858307B (en) Fuzzy test method and equipment
EP3734491A1 (en) Method, apparatus, device, and medium for implementing simulator
De Paula et al. An efficient rewriting framework for trace coverage of symmetric systems
CN112650679B (en) Test verification method, device and computer system
De Paula et al. Rewriting toward trace coverage analysis of symmetric systems
CN117236241A (en) Verification method and device of TCAM packaging module and computing equipment
WO2024102150A1 (en) Determining internet protocol (ip) addresses for scanning in wireless network
CN115840593A (en) Method and device for verifying execution component in processor, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant