CN114706531A - Data processing method, device, chip, equipment and medium - Google Patents

Data processing method, device, chip, equipment and medium

Info

Publication number
CN114706531A
CN114706531A (application number CN202210399870.5A)
Authority
CN
China
Prior art keywords
block
write
read
write data
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210399870.5A
Other languages
Chinese (zh)
Inventor
姜涛 (Jiang Tao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hanbo Semiconductor Shanghai Co ltd
Original Assignee
Hanbo Semiconductor Shanghai Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hanbo Semiconductor Shanghai Co ltd filed Critical Hanbo Semiconductor Shanghai Co ltd
Priority to CN202210399870.5A priority Critical patent/CN114706531A/en
Publication of CN114706531A publication Critical patent/CN114706531A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/061 Improving I/O performance
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0638 Organizing or formatting or addressing of data
    • G06F 3/064 Management of blocks
    • G06F 3/0655 Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F 3/0656 Data buffering arrangements
    • G06F 3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/0671 In-line storage system
    • G06F 3/0673 Single storage device
    • G06F 3/0679 Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

A data processing method, device, chip, equipment and medium are provided, relating to the technical field of data storage, and in particular to the technical field of chips. The implementation scheme is as follows: in response to receiving a first read request and a first write request for a first single-port memory block of a plurality of single-port memory blocks in a first clock cycle, a read operation for the first single-port memory block is performed according to the first read request, wherein the first write request includes first write data and a first write address; a caching request for the first write data is sent to a shared cache block so as to cache the first write data at a target location in the shared cache block, wherein the target location is the storage location in the shared cache block corresponding to the first write address of the first single-port memory block; and in response to receiving a caching request for second write data directed at the target location of the shared cache block in a second clock cycle, an operation of transferring the first write data to the first single-port memory block is performed before the second write data is cached in the shared cache block.

Description

Data processing method, device, chip, equipment and medium
Technical Field
The present disclosure relates to the field of data storage technologies, and more particularly to the field of chip technologies, and specifically provides a data processing method and apparatus, a chip, an electronic device, a computer-readable storage medium, and a computer program product.
Background
Many electronic circuits today include their own memories. The memories used in electronic circuits include single-port memories, dual-port memories and the like. For each single-port memory block in a single-port memory, only one of a read operation and a write operation is allowed within the same clock cycle; the read operation and the write operation cannot be performed simultaneously.
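For illustration only, this single-port constraint can be modeled behaviorally. The following Python sketch (the class and method names are invented for this illustration and do not come from the disclosure) shows a memory block that services at most one access per clock cycle; a dual-port memory would simply allow two such accesses per cycle.

    # Behavioral illustration only; names are invented for this sketch.
    class SinglePortBlock:
        """A memory block that services at most one access (read OR write) per clock cycle."""

        def __init__(self, num_entries: int):
            self.cells = [0] * num_entries
            self.last_cycle = None  # cycle in which the single port was last used

        def access(self, cycle: int, addr: int, write_data=None):
            if self.last_cycle == cycle:
                raise RuntimeError("single-port block already accessed in this clock cycle")
            self.last_cycle = cycle
            if write_data is None:           # read operation
                return self.cells[addr]
            self.cells[addr] = write_data    # write operation
            return None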
Disclosure of Invention
The present disclosure provides a method, an apparatus, a chip, an electronic device, a computer-readable storage medium, and a computer program product for data processing.
According to an aspect of the present disclosure, there is provided a data processing method for a circuit having a memory function, the circuit including a plurality of single-port memory blocks and a shared cache block, the method including: in response to receiving a first read request and a first write request for a first single-port memory block of the plurality of single-port memory blocks in a first clock cycle, performing a read operation for the first single-port memory block according to the first read request, wherein the first write request includes first write data and a first write address; sending a caching request for the first write data to the shared cache block so as to cache the first write data at a target location in the shared cache block, wherein the target location is the storage location in the shared cache block corresponding to the first write address of the first single-port memory block; and in response to receiving a caching request for second write data directed at the target location of the shared cache block in a second clock cycle, performing an operation of transferring the first write data to the first single-port memory block before the second write data is cached in the shared cache block.
According to another aspect of the present disclosure, there is provided a data processing apparatus for a circuit having a memory function, the circuit including a plurality of single-port memory blocks and a shared cache block, the apparatus comprising: a first read module configured to, in response to receiving a first read request and a first write request for a first single-port memory block of the plurality of single-port memory blocks in a first clock cycle, perform a read operation for the first single-port memory block according to the first read request, wherein the first write request includes first write data and a first write address; a write module configured to send a caching request for the first write data to the shared cache block to cache the first write data at a target location in the shared cache block, wherein the target location is the storage location in the shared cache block corresponding to the first write address of the first single-port memory block; and a transmission module configured to, in response to receiving a caching request for second write data directed at the target location of the shared cache block in a second clock cycle, perform an operation of transferring the first write data to the first single-port memory block before the second write data is cached in the shared cache block.
According to yet another aspect of the present disclosure, there is provided a chip comprising: at least one processor; and a memory having a computer program stored thereon, wherein the computer program, when executed by the processor, causes the processor to perform the above-described method.
According to yet another aspect of the present disclosure, there is provided an electronic device including the chip described above.
According to yet another aspect of the present disclosure, a computer-readable storage medium is provided, having stored thereon a computer program which, when executed by a processor, causes the processor to carry out the above-mentioned method.
According to yet another aspect of the present disclosure, a computer program product is provided, comprising a computer program which, when executed by a processor, causes the processor to carry out the above-mentioned method.
According to one or more embodiments of the present disclosure, when a single-port memory block in the circuit receives a read request and a write request simultaneously in the same clock cycle, the circuit can still read data based on the read request and preserve the write data, so that data interaction between the circuit and other processors proceeds normally when a read-write conflict occurs, thereby avoiding the system performance degradation that the read-write conflict would otherwise cause.
These and other aspects of the disclosure will be apparent from and elucidated with reference to the embodiments described hereinafter.
Drawings
Further details, features and advantages of the disclosure are disclosed in the following description of exemplary embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a flow chart illustrating a data processing method according to an exemplary embodiment;
FIG. 2 is a diagram illustrating a data processing method according to an exemplary embodiment;
FIG. 3 is a timing diagram illustrating a data processing method according to an exemplary embodiment;
FIG. 4 is another timing diagram illustrating a data processing method according to an exemplary embodiment;
FIG. 5 is a schematic block diagram illustrating a data processing apparatus according to an example embodiment; and
FIG. 6 is a block diagram illustrating an exemplary electronic device that can be applied to exemplary embodiments.
Detailed Description
In the present disclosure, unless otherwise specified, the use of the terms "first", "second", etc. to describe various elements is not intended to limit the positional relationship, the timing relationship, or the importance relationship of the elements, and such terms are used only to distinguish one element from another. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, based on the context, they may also refer to different instances.
The terminology used in the description of the various examples in this disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, if the number of elements is not specifically limited, the elements may be one or more. As used herein, the term "plurality" means two or more, and the term "based on" should be interpreted as "based, at least in part, on". Further, the terms "and/or" and "at least one of" encompass any and all possible combinations of the listed items.
Based on the read-write characteristics of a single-port memory, when a read request and a write request are received simultaneously in the same clock cycle, only one of the requests can be processed. In the related art, to overcome this read-write conflict, additional read-write ports are added to the single-port memory block, so that in the same cycle one port is used for reading data while another port is used for storing data. However, increasing the number of read-write ports inevitably increases the circuit area, which runs counter to the current demand for miniaturized integrated circuit design.
On this basis, the present disclosure provides a data processing method that, in response to a read-write conflict occurring for a first single-port memory block in a first clock cycle, preferentially executes the read operation for the first single-port memory block and, at the same time, caches the first write data to be written into the first single-port memory block in a shared cache block. The operation of transferring the first write data to the first single-port memory block is not performed until, in a second clock cycle, the storage location in the shared cache block that holds the first write data receives a caching request for second write data. Therefore, even when a single-port memory block in the circuit receives a read request and a write request simultaneously in the first clock cycle, the circuit can still read data based on the read request and preserve the write data, data interaction between the circuit and other processors can proceed normally despite the read-write conflict, and the system performance degradation caused by the read-write conflict is avoided.
Exemplary embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.
FIG. 1 is a flow chart illustrating a data processing method 100 according to an exemplary embodiment. The data processing method 100 is used for a circuit having a memory function, the circuit including a plurality of single-port memory blocks and a shared cache block. The method 100 includes: step S101, in response to receiving a first read request and a first write request for a first single-port memory block of the plurality of single-port memory blocks in a first clock cycle, performing a read operation for the first single-port memory block according to the first read request, wherein the first write request includes first write data and a first write address; step S102, sending a caching request for the first write data to the shared cache block so as to cache the first write data at a target location in the shared cache block, wherein the target location is the storage location in the shared cache block corresponding to the first write address of the first single-port memory block; and step S103, in response to receiving a caching request for second write data directed at the target location of the shared cache block in a second clock cycle, performing an operation of transferring the first write data to the first single-port memory block before the second write data is cached in the shared cache block.
Therefore, when a first read request and a first write request for the first single-port memory block are received in one clock cycle, that is, when a read-write conflict occurs, the first write data can be cached in the shared cache block, so that data interaction between the circuit and other processors can still proceed normally under the read-write conflict, and the system performance degradation caused by the read-write conflict is avoided.

At the same time, the shared cache block allows faster data reads, so the time for which the first write data remains cached in the shared cache block is prolonged as far as possible; that is, the operation of transferring the first write data to the first single-port memory block is not performed until, in the second clock cycle, the storage location caching the first write data in the shared cache block receives a caching request for the second write data. In this way, the cache space in the shared cache block is fully utilized, the efficiency of reading the data stored in the circuit is improved, and the resource overhead of unnecessary data transfers between the shared cache block and the first single-port memory block is avoided.
In other words, under the condition of using single-port memory blocks, the present disclosure realizes the ability to process read and write requests simultaneously that is normally possessed by multi-port memory blocks. This not only keeps cost low, but also effectively saves processing resources, reduces power consumption, and effectively improves the memory performance of the system without additionally increasing the circuit scale.
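As a rough behavioral sketch of steps S101 to S103 (this is not the patented circuit; all names are illustrative, each bank is represented as a plain list of cells for simplicity, and the target location is assumed to be the shared-cache entry whose index equals the write address), the conflict handling could be modeled in Python as follows. The per-bank cache bank refinement described later is omitted here.

    # Illustrative sketch of steps S101-S103 only; not the patented implementation.
    class SharedCacheBlock:
        def __init__(self, num_entries: int):
            # entry index -> (write_data, owning_bank_id), or None when the entry is free
            self.entries = [None] * num_entries

    def handle_read_write_conflict(banks, shared, bank_id, read_addr, write_addr, write_data):
        """A read and a write arrive for the same single-port bank in one cycle.

        banks  : list of per-bank cell lists (one list per single-port memory block)
        shared : a SharedCacheBlock instance
        """
        # S101: serve the read first, using the bank's only port in this cycle.
        read_data = banks[bank_id][read_addr]

        # S102: send the write data to its target location in the shared cache block.
        target = write_addr  # assumed one-to-one entry correspondence
        # S103: if earlier write data already occupies that location, transfer it to
        # its owning bank before overwriting it (in hardware this happens in a later
        # cycle and goes through a per-bank cache bank; see the sketch further below).
        if shared.entries[target] is not None:
            old_data, old_bank = shared.entries[target]
            banks[old_bank][target] = old_data
        shared.entries[target] = (write_data, bank_id)
        return read_data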
The circuit with the memory function may be a chip. Each of the plurality of single-port memory blocks is a bank partitioned from the available memory space of the circuit.
According to some embodiments, each of the plurality of single-ported memory blocks is the same size.
In one embodiment, each of the plurality of single-port memory blocks includes the same number of memory cells. In particular, each single-port memory block has the same number of sequentially arranged entries.
For step S101, if a first read request and a first write request for the first single-port memory block are received simultaneously in the first clock cycle, it can be determined that a read-write conflict for the first single-port memory block has occurred. In order to avoid the system performance degradation that would be caused by the processor failing to obtain read data in time, the read operation based on the first read request is processed preferentially.
In step S102, for a first write request in which a read-write conflict occurs, a cache request for first write data in the first write request is sent to a shared cache block in a circuit to cache the first write data to a target location in the shared cache block. Thus, in a first clock cycle, first write data that cannot be processed immediately due to a read-write collision may be buffered in a shared buffer block to await processing in a subsequent clock cycle.
According to some embodiments, the shared cache block and each of the plurality of single-port memory blocks include the same number of memory cells, and for each of the plurality of single-port memory blocks, the memory cells in that single-port memory block are in one-to-one correspondence with the memory cells in the shared cache block. The first write address indicates a first memory cell of the memory cells of the first single-port memory block, and the target location is the target memory cell in the shared cache block corresponding to the first memory cell. Thus, the first write data for the first single-port memory block can be cached in an ordered fashion within the shared cache block.
For example, the shared cache block and each of the plurality of single-port memory blocks each include memory cells entry1 to entry5, and memory cells entry1 to entry5 of each single-port memory block correspond to entry1 to entry5 in the shared cache block, respectively. When a read-write conflict occurs and the first write address indicates that the first write data is to be written into entry3 of the first single-port memory block, the first write data can be cached into entry3 of the shared cache block.
Therefore, even if read-write conflicts occur successively for write requests directed at a plurality of memory cells in the same single-port memory block, the shared cache block can cache the write data of those write requests in order, ensuring the reliability of the system during read-write conflicts to the greatest extent.
According to some embodiments, after the first write data is cached at the target location in the shared cache block, the mapping relationship between the target location and the first single-port memory block is identified.

Therefore, based on the mapping relationship between the target location and the first single-port memory block, it can be determined that the single-port memory block into which the first write data cached at the target location is to be written is the first single-port memory block, which ensures that the first write data can be correctly written into the first single-port memory block in a subsequent clock cycle.

With this mapping relationship established, the shared cache block can cache write data for a plurality of single-port memory blocks at the same time.
In the shared cache block, write data for multiple single-port memory blocks can be cached in a mixed manner; for example, write data for different single-port memory blocks may be cached successively at the same target location in the shared cache block. The shared cache block does not need to reserve a dedicated storage area for each single-port memory block, so the cache space in the shared cache block can be fully utilized and the flexibility of data caching in the shared cache block is improved.
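A tiny illustration of this mixed caching, using the hypothetical SharedCacheBlock sketch above (the owner tag stored with each entry records which single-port memory block the data belongs to):

    # Illustration only: write data destined for different banks coexists in one
    # shared cache block, distinguished by the owner tag stored with each entry.
    shared = SharedCacheBlock(8)
    shared.entries[3] = (0xAA, 0)   # data waiting to reach entry3 of bank 0
    shared.entries[5] = (0xBB, 1)   # data waiting to reach entry5 of bank 1
    # A later caching request for entry3 on behalf of bank 1 would first transfer
    # (0xAA, bank 0) back toward bank 0, then store the new data at the same entry.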
In step S103, when the target location of the shared cache block needs to cache new write data, namely the second write data, in the second clock cycle, the operation of transferring the first write data to the first single-port memory block is performed.
The second clock cycle and the first clock cycle may be two consecutive clock cycles, or two discontinuous clock cycles, which is not limited herein.
According to some embodiments, the second write data comes from a second write request that is received together with a second read request for a second single-port memory block of the plurality of single-port memory blocks in the second clock cycle. The second write request includes the second write data and a second write address, and the target location is the storage location in the shared cache block corresponding to the second write address of the second single-port memory block.
It can be seen that, when a read-write conflict occurs for the second single-port memory block in the second clock cycle, the read operation for the second single-port memory block can be performed preferentially, and the second write data in the second write request can be cached in the shared cache block.

Because read and write operations on different single-port memory blocks do not affect each other, the read operation for the second single-port memory block and the write operation for the first single-port memory block can be performed simultaneously in the second clock cycle.
According to some embodiments, the second write address indicates a second memory cell of the memory cells of the second single-port memory block, and the target location is the target memory cell in the shared cache block corresponding to the second memory cell.
Again taking the example in which the shared cache block and each of the plurality of single-port memory blocks include memory cells entry1 to entry5, when a read-write conflict for the second single-port memory block occurs, the second write data for the second single-port memory block is written into the shared cache block. If the first write data is currently cached in entry3 of the shared cache block and the second write address indicates that the second write data needs to be written into entry3 of the second single-port memory block, a caching request for the second write data is sent to the target location of the shared cache block, namely entry3. The first write data currently stored in entry3 of the shared cache block is read out before the second write data is cached into entry3 of the shared cache block.
According to some embodiments, performing the operation of transferring the first write data to the first single-port memory block comprises: reading the first write data from the shared cache block; and writing the first write data into the first single-port memory block.
According to some embodiments, each of the plurality of single-port memory blocks has a corresponding cache bank, and performing the operation of transferring the first write data to the first single-port memory block includes: reading the first write data from the shared cache block; caching the first write data into a first cache bank corresponding to the first single-port memory block; in response to receiving a caching request for third write data directed at the first cache bank in a third clock cycle, reading the first write data from the first cache bank before the third write data is cached in the first cache bank; and writing the first write data into the first single-port memory block.
In a high-speed memory, it is difficult to complete both the reading of the first write data from the shared cache block and the writing of the first write data into the first single-port memory block within one clock cycle. To meet the timing requirements of the cache memory, a respective cache bank is provided for each of the plurality of single-port memory blocks. By buffering the first write data in the first cache bank, the operation of transferring the first write data to the first single-port memory block can be completed over two or more clock cycles.

Similar to the shared cache block, the first cache bank also supports efficient data reads. Therefore, in order to improve the data reading efficiency of the system and avoid unnecessary data transfers, the buffering duration of the first write data in the first cache bank can be prolonged as far as possible; that is, only when a caching request for third write data directed at the first cache bank is received in the third clock cycle is the first write data read out of the first cache bank and written into the first single-port memory block.
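Extending the earlier sketch, the two-stage transfer path could look roughly as follows, under the assumption that each cache bank holds a single pending (address, data) pair; again, the names are illustrative and the per-cycle timing is simplified into function calls.

    # Sketch of the two-stage transfer path; illustrative names, simplified timing.
    class CacheBank:
        def __init__(self):
            self.pending = None  # (write_addr, write_data) or None when empty

    def evict_shared_entry(shared, cache_banks, banks, target):
        """Free a shared-cache entry whose data must make way for new write data."""
        old_data, old_bank = shared.entries[target]
        bank_buf = cache_banks[old_bank]
        # If the owning bank's cache bank is already occupied, its current contents
        # are written into the single-port memory block first (the "third clock
        # cycle" case described above).
        if bank_buf.pending is not None:
            addr, data = bank_buf.pending
            banks[old_bank][addr] = data
        # The evicted shared-cache data is then parked in the bank's cache bank,
        # tagged with the write address it must eventually reach.
        bank_buf.pending = (target, old_data)
        shared.entries[target] = None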
According to some embodiments, after the first write data is cached in the first cache bank corresponding to the first single-port memory block, the mapping relationship between the first cache bank and the first write address is identified.

Since each of the plurality of single-port memory blocks has a respective cache bank, it can be determined that first write data cached in the first cache bank is to be written into the first single-port memory block. Therefore, only the mapping relationship between the first cache bank and the first write address in the first single-port memory block needs to be identified to ensure that, in the third clock cycle, the first write data can be correctly written into the storage location indicated by the first write address in the first single-port memory block.
FIG. 2 is a schematic diagram of a data processing method according to an exemplary embodiment of the disclosure, wherein a plurality of single-port memory blocks 261-265 are included in the circuit shown in FIG. 2, and the plurality of single-port memory blocks 261-265 have corresponding cache banks 251-255, respectively.
In the first clock cycle, a first read request 201 and a first write request 202 are received. If the read-write conflict determination module 210 determines that the first read request 201 and the first write request 202 are both directed at the same single-port memory block, it is determined that a read-write conflict occurs; otherwise, it is determined that no read-write conflict occurs.

When no read-write conflict occurs, the read operation according to the first read request and the write operation according to the first write request can be performed simultaneously in the first clock cycle.

When a read-write conflict occurs, only the read operation according to the first read request is performed in the first clock cycle. The following description takes the case in which the first read request 201 and the first write request 202 are both directed at the single-port memory block 262 as an example.
The first read address in the first read request 201 is sent to the single-port memory block selection module 220. The single-port memory block selection module 220 then sends the first read address to the single-port memory block 262 so that the first read data 203 requested by the first read request is read out of the single-port memory block 262, and the first read data 203 is output through the read data control module 270.

The read-write conflict determination module 210 inputs the first write address of the first write request 202 into the cache control module 240, and the first write data 204 is cached in the shared cache block 230 through the write data control module 250. Assuming the first write address indicates that the first write data 204 is to be written into entry2 of the single-port memory block 262, the cache control module 240 controls the first write data 204 to be cached into entry2 of the shared cache block 230 according to the first write address, and at the same time the cache control module 240 identifies the mapping relationship between entry2 and the single-port memory block 262.

In the second clock cycle, in response to the read-write conflict determination module 210 determining that a read-write conflict occurs between the second read request and the second write request, and the shared cache block 230 receiving a caching request for the second write data directed at entry2, the first write data 204 is read from the shared cache block 230 and cached in the cache bank 252 corresponding to the single-port memory block 262. At the same time, the mapping relationship between the first write data 204 cached in the cache bank 252 and entry2 is identified.

In response to the cache bank 252 receiving a caching request for third write data in the third clock cycle, the first write data 204 is read from the cache bank 252 and written into entry2 of the single-port memory block 262.
According to some embodiments, in response to receiving a third read request for a third single-port memory block of the plurality of single-port memory blocks in a fourth clock cycle, it is determined whether the data to be read by the third read request is stored in the shared cache block or in a third cache bank corresponding to the third single-port memory block; and in response to the data to be read by the third read request being stored in either the shared cache block or the third cache bank, the data is read from whichever of the shared cache block or the third cache bank stores it.

Since the data to be read by a third read request for a third single-port memory block may be cached in the shared cache block or in the third cache bank, the data cached in these two locations is checked first; if the data to be read by the third read request hits there, it can be read efficiently from the location in which it is cached.

According to some embodiments, in response to neither the shared cache block nor the third cache bank storing the data to be read by the third read request, the data is read from the third single-port memory block.
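Put together, the read path described above amounts to a three-level lookup. The following sketch expresses it with the illustrative structures from the earlier snippets, under the same index-correspondence assumption; it is not taken from the disclosure itself.

    # Read-path sketch: shared cache block first, then the bank's cache bank,
    # then the single-port memory block itself.
    def read_data(banks, shared, cache_banks, bank_id, read_addr):
        # 1. The shared cache block may still hold the newest data for this address.
        entry = shared.entries[read_addr]              # assumed index correspondence
        if entry is not None and entry[1] == bank_id:
            return entry[0]
        # 2. Otherwise the data may be parked in this bank's cache bank.
        pending = cache_banks[bank_id].pending
        if pending is not None and pending[0] == read_addr:
            return pending[1]
        # 3. Otherwise read from the single-port memory block.
        return banks[bank_id][read_addr]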
Fig. 3 illustrates a timing diagram for data processing according to an exemplary embodiment of the present disclosure.
In fig. 3, 301 is the write address timing, 302 the write data timing, 303 the read address timing, 304 the read data timing, 305 the read-write conflict timing, 306 the shared cache block write address timing, 307 the shared cache block write data timing, 308 the shared cache block read address timing, and 309 the shared cache block read data timing.
As shown in fig. 3, during clock cycles 2 to 4, write data 0xEA (address 100), 0xEB (address 101), and 0xEC (address 102) for the first single-port memory block are received in sequence; since no read-write conflict occurs during this period, the write operations for write data 0xEA, 0xEB, and 0xEC can be performed normally.

During clock cycles 5 to 7, a read request and a write request for the first single-port memory block are received simultaneously; that is, write data 0xED (address 103), 0xEE (address 104), and 0xEF (address 105) are received in sequence together with read requests for address 100, address 101, and address 102 of the first single-port memory block. A read-write conflict therefore occurs during this period, and the read-write conflict timing 305 is pulled high. Because of the read-write conflict, write data 0xED (address 103), 0xEE (address 104), and 0xEF (address 105) are cached in the shared cache block.

In clock cycle 8, a read request for address 103 of the first single-port memory block is received. The data cached in the shared cache block is looked up first, write data 0xED (address 103) is found in the shared cache block, and write data 0xED (address 103) is read from the shared cache block.

In clock cycles 9 and 10, read requests for addresses 104 and 105 of the first single-port memory block are received, and write data 0xEE (address 104) and 0xEF (address 105) are read from the shared cache block in a manner similar to clock cycle 8.
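The scenario of fig. 3 can be replayed, very roughly, with the illustrative sketches above; cycle-accurate timing and the timing signals themselves are not modeled.

    # Rough, non-cycle-accurate replay of the fig. 3 scenario using the sketches above.
    banks = [[0] * 256]              # a single bank ("bank 0") is enough for this example
    shared = SharedCacheBlock(256)
    cache_banks = [CacheBank()]

    # Clock cycles 2-4: no conflict, the writes reach the single-port block directly.
    for addr, data in [(100, 0xEA), (101, 0xEB), (102, 0xEC)]:
        banks[0][addr] = data

    # Clock cycles 5-7: simultaneous read and write; the reads are served and the
    # write data 0xED/0xEE/0xEF ends up in the shared cache block.
    for r_addr, w_addr, w_data in [(100, 103, 0xED), (101, 104, 0xEE), (102, 105, 0xEF)]:
        assert handle_read_write_conflict(banks, shared, 0, r_addr, w_addr, w_data) == banks[0][r_addr]

    # Clock cycle 8: a read of address 103 hits the shared cache block.
    assert read_data(banks, shared, cache_banks, 0, 103) == 0xED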
Fig. 4 illustrates another data processing timing diagram according to an exemplary embodiment of the present disclosure.
In fig. 4, 401 is the write address timing, 402 the write data timing, 403 the read address timing, 404 the read data timing, and 405 the read-write conflict timing; 406 is the shared cache block write address timing, 407 the shared cache block write data timing, 408 the shared cache block read address timing, and 409 the shared cache block read data timing; 410 is the first single-port memory block address timing, 411 the first single-port memory block read data timing, and 412 the first single-port memory block write data timing; 413 is the write address timing of the cache bank of the first single-port memory block, 414 its write data timing, 415 its read address timing, and 416 its read data timing; 417 is the second single-port memory block address timing, 418 its read data timing, and 419 its write data timing; 420 is the write address timing of the cache bank of the second single-port memory block, 421 its write data timing, 422 its read address timing, and 423 its read data timing.
During clock cycles 2 to 5, write data 0xEA (address 100), 0xEB (address 101), 0xEC (address 102), and 0xED (address 103) for the first single-port memory block are received in sequence. During clock cycles 6 to 9, write data 0xFA (address 200), 0xFB (address 201), 0xFC (address 202), and 0xFD (address 203) for the second single-port memory block are received in sequence. Since no read-write conflict occurs during this period, the write operations for these data can be performed normally.
To meet the timing requirements of a high-speed memory, write data is first cached in the cache bank corresponding to the single-port memory block. Because the cache space of a cache bank can hold only one set of write data, when new write data needs to be cached in the cache bank, the write data currently in the cache bank is written into the corresponding single-port memory block.
During clock cycles 13 to 15, a read request and a write request for the first single-port memory block are received simultaneously; that is, write data 0xE1 (address 106), 0xE2 (address 107), and 0xE3 (address 108) for the first single-port memory block are received in sequence together with read requests for address 100, address 101, and address 102 of the first single-port memory block. A read-write conflict therefore occurs during this period, and the read-write conflict timing 405 is pulled high.

Since the read data for address 100, address 101, and address 102 of the first single-port memory block is neither cached in the shared cache block nor found in the first cache bank corresponding to the first single-port memory block, the read data for address 100, address 101, and address 102 is read from the first single-port memory block in sequence. Because of the read-write conflict, write data 0xE1 (address 106), 0xE2 (address 107), and 0xE3 (address 108) is cached in entry6, entry7, and entry8 of the shared cache block in sequence.
In clock cycle 16, a read request and a write request for the second single-port memory block are received simultaneously; that is, write data 0xF1 (address 206) for the second single-port memory block and a read request for address 201 of the second single-port memory block are received at the same time, so a read-write conflict occurs and the read-write conflict timing 405 is pulled high.

Since the read data for address 201 of the second single-port memory block is not cached in the shared cache block and is not found in the second cache bank corresponding to the second single-port memory block, the read data for address 201 is read from the second single-port memory block. Because of the read-write conflict, write data 0xF1 (address 206) needs to be cached in entry6 of the shared cache block. Since write data 0xE1 (address 106) is currently stored in entry6 of the shared cache block, write data 0xE1 (address 106) must be read out of entry6 before write data 0xF1 (address 206) is written into entry6. The write data 0xE1 (address 106) that has been read out is written into the first cache bank corresponding to the first single-port memory block.
In clock cycle 17, a read request and a write request for the second single-port memory block are received simultaneously; that is, write data 0xF2 (address 207) for the second single-port memory block and a read request for address 202 of the second single-port memory block are received at the same time, so a read-write conflict occurs and the read-write conflict timing 405 is pulled high.

Since the read data for address 202 of the second single-port memory block is not cached in the shared cache block and is not found in the second cache bank corresponding to the second single-port memory block, the read data for address 202 is read from the second single-port memory block. Because of the read-write conflict, write data 0xF2 (address 207) needs to be cached in entry7 of the shared cache block. Since write data 0xE2 (address 107) is currently stored in entry7 of the shared cache block, write data 0xE2 (address 107) must be read out of entry7 before write data 0xF2 (address 207) is written into entry7.

Since write data 0xE1 (address 106) is currently cached in the first cache bank corresponding to the first single-port memory block, write data 0xE1 (address 106) is first written into the first single-port memory block, and then write data 0xE2 (address 107) is written into the first cache bank.
In clock cycle 18, a read request and a write request for the second single-port memory block are received simultaneously; that is, write data 0xF3 (address 208) for the second single-port memory block and a read request for address 203 of the second single-port memory block are received at the same time, so a read-write conflict occurs and the read-write conflict timing 405 is pulled high.

Since the read data for address 203 is found in the second cache bank corresponding to the second single-port memory block, the read data for address 203 is read from the second cache bank. Because of the read-write conflict, write data 0xF3 (address 208) needs to be cached in entry8 of the shared cache block. Since write data 0xE3 (address 108) is currently stored in entry8 of the shared cache block, write data 0xE3 (address 108) must be read out of entry8 before write data 0xF3 (address 208) is written into entry8.

Since write data 0xE2 (address 107) is currently cached in the first cache bank corresponding to the first single-port memory block, write data 0xE2 (address 107) is first written into the first single-port memory block, and then write data 0xE3 (address 108) is written into the first cache bank.
Fig. 5 is a schematic block diagram illustrating a data processing apparatus 500 for a circuit having a memory function, the circuit including a plurality of single-port memory blocks and a shared cache block, the apparatus 500 including: a first reading module 501 configured to, in response to receiving a first read request and a first write request for a first single-port memory block of the plurality of single-port memory blocks in a first clock cycle, perform a read operation for the first single-port memory block according to the first read request, wherein the first write request includes first write data and a first write address; a write module 502 configured to send a caching request for the first write data to the shared cache block to cache the first write data at a target location in the shared cache block, where the target location is the storage location in the shared cache block corresponding to the first write address of the first single-port memory block; and a transmitting module 503 configured to, in response to receiving a caching request for second write data directed at the target location of the shared cache block in a second clock cycle, perform an operation of transferring the first write data to the first single-port memory block before the second write data is cached in the shared cache block.
It should be understood that the various modules of the apparatus 500 shown in fig. 5 may correspond to the various steps of the method 100 described with reference to fig. 1. Thus, the operations, features, and advantages described above with respect to the method 100 are equally applicable to the apparatus 500 and the modules it comprises. Certain operations, features, and advantages may not be described in detail herein for the sake of brevity.
According to some embodiments, the second write data comes from a second write request that is received together with a second read request for a second single-port memory block of the plurality of single-port memory blocks in the second clock cycle. The second write request includes the second write data and a second write address, and the target location is the storage location in the shared cache block corresponding to the second write address of the second single-port memory block.
According to some embodiments, the shared cache block and each of the plurality of single-port memory blocks include the same number of memory cells, and for each of the plurality of single-port memory blocks, the memory cells in that single-port memory block are in one-to-one correspondence with the memory cells in the shared cache block. The first write address indicates a first memory cell of the memory cells of the first single-port memory block, and the target location is the target memory cell in the shared cache block corresponding to the first memory cell.

According to some embodiments, the second write address indicates a second memory cell of the memory cells of the second single-port memory block, and the target location is the target memory cell in the shared cache block corresponding to the second memory cell.
According to some embodiments, the apparatus further comprises: a first identification module configured to identify the mapping relationship between the target location and the first single-port memory block after the first write data is cached at the target location in the shared cache block.
According to some embodiments, each of the plurality of single-port memory blocks has a corresponding cache bank, and the transmission module comprises: a first read submodule configured to read the first write data from the shared cache block; a first write submodule configured to cache the first write data into the first cache bank corresponding to the first single-port memory block; a second read submodule configured to, in response to receiving a caching request for third write data directed at the first cache bank in a third clock cycle, read the first write data from the first cache bank before the third write data is cached in the first cache bank; and a second write submodule configured to write the first write data into the first single-port memory block.
According to some embodiments, the apparatus further comprises: a second identification module configured to identify the mapping relationship between the first cache bank and the first write address after the first write data is cached in the first cache bank corresponding to the first single-port memory block.

According to some embodiments, the apparatus further comprises: a first determining module configured to, in response to receiving a third read request for a third single-port memory block of the plurality of single-port memory blocks in a fourth clock cycle, determine whether the data to be read by the third read request is stored in the shared cache block or in a third cache bank corresponding to the third single-port memory block; and a second reading module configured to, in response to the data to be read by the third read request being stored in either the shared cache block or the third cache bank, read the data from whichever of the shared cache block or the third cache bank stores it.

According to some embodiments, the apparatus further comprises: a third reading module configured to, in response to the data to be read by the third read request being stored in neither the shared cache block nor the third cache bank, read the data from the third single-port memory block.
Although specific functionality is discussed above with reference to particular modules, it should be noted that the functionality of the various modules discussed herein can be separated into multiple modules and/or at least some of the functionality of multiple modules can be combined into a single module. Performing an action by a particular module discussed herein includes the particular module itself performing the action, or alternatively the particular module invoking or otherwise accessing another component or module that performs the action (or performs the action in conjunction with the particular module). Thus, a particular module performing an action can include the particular module performing the action itself and/or another module performing the action that the particular module invokes or otherwise accesses.
It should also be appreciated that various techniques may be described herein in the general context of software, hardware elements, or program modules. The various modules described above with respect to fig. 5 may be implemented in hardware or in hardware in combination with software and/or firmware. For example, the modules may be implemented as computer program code/instructions configured to be executed in one or more processors and stored in a computer-readable storage medium. Alternatively, the modules may be implemented as hardware logic/circuitry. For example, in some embodiments, one or more of the various modules described in fig. 5 may be implemented together in a System on Chip (SoC). The SoC may include an integrated circuit chip (which includes one or more components of a Processor (e.g., a Central Processing Unit (CPU), microcontroller, microprocessor, Digital Signal Processor (DSP), etc.), memory, one or more communication interfaces, and/or other circuitry), and may optionally execute received program code and/or include embedded firmware to perform functions.
According to an aspect of the present disclosure, there is provided a chip including: at least one processor; and a memory having a computer program stored thereon, wherein the computer program, when executed by the processor, causes the processor to perform any of the methods described above.
According to an aspect of the present disclosure, there is provided an electronic device including the chip described above.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, causes the processor to perform any one of the methods described above.
According to an aspect of the disclosure, a computer program product is provided, comprising a computer program which, when executed by a processor, causes the processor to perform any of the methods described above.
Illustrative examples of such electronic devices, non-transitory computer-readable storage media, and computer program products are described below in connection with fig. 6.
Fig. 6 illustrates an example configuration of an electronic device 1700 that may be used to implement the methods described herein. The data processing apparatus described above may also be implemented, in whole or at least in part, by an electronic device 1700 or similar device or system.
The electronic device 1700 can be a variety of different types of devices. Examples of electronic device 1700 include, but are not limited to: a desktop computer, a server computer, a notebook or netbook computer, a mobile device (e.g., a tablet, a cellular or other wireless telephone (e.g., a smartphone), a notepad computer, a mobile station), a wearable device (e.g., glasses, a watch), an entertainment device (e.g., an entertainment appliance, a set-top box communicatively coupled to a display device, a gaming console), a television or other display device, an automotive computer, and so forth.
The electronic device 1700 may include at least one processor 1702, memory 1704, communication interface(s) 1706, display device 1708, other input/output (I/O) devices 1710, and one or more mass storage devices 1712, which may be capable of communicating with each other, such as through a system bus 1714 or other appropriate connection.
Processor 1702 may be a single processing unit or multiple processing units, all of which may include single or multiple computing units or multiple cores. The processor 1702 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitry, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor 1702 can be configured to retrieve and execute computer readable instructions stored in the memory 1704, the mass storage device 1712, or other computer readable media, such as program code for an operating system 1716, program code for application programs 1718, program code for other programs 1720, and so forth.
Memory 1704 and mass storage device 1712 are examples of computer readable storage media for storing instructions that are executed by processor 1702 to implement the various functions described above. For example, memory 1704 may generally include both volatile and non-volatile memory (e.g., RAM, ROM, etc.). In addition, the mass storage device 1712 may generally include a hard disk drive, solid state drive, removable media, including external and removable drives, memory cards, flash memory, floppy disks, optical disks (e.g., CDs, DVDs), storage arrays, network attached storage, storage area networks, and the like. Memory 1704 and mass storage device 1712 may both be referred to herein collectively as memory or computer-readable storage media, and may be non-transitory media capable of storing computer-readable, processor-executable program instructions as computer program code, which may be executed by processor 1702 as a particular machine configured to implement the operations and functions described in the examples herein.
A number of programs may be stored on the mass storage device 1712. These programs include an operating system 1716, one or more application programs 1718, other programs 1720, and program data 1722, which can be loaded into memory 1704 for execution. Examples of such applications or program modules may include, for instance, computer program logic (e.g., computer program code or instructions) for implementing the following components/functions: method 100 (including any suitable steps of method 100), and/or additional embodiments described herein.
Although illustrated in fig. 6 as being stored in memory 1704 of electronic device 1700, modules 1716, 1718, 1720, and 1722, or portions thereof, can be implemented using any form of computer-readable media that is accessible by electronic device 1700. As used herein, "computer-readable media" includes at least two types of computer-readable media, namely computer-readable storage media and communication media.
Computer-readable storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. Computer-readable storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information for access by an electronic device. In contrast, communication media may embody computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism. Computer-readable storage media, as defined herein, does not include communication media.
One or more communication interfaces 1706 are used to exchange data with other devices, such as over a network, a direct connection, and the like. Such communication interfaces may be one or more of the following: any type of network interface (e.g., a Network Interface Card (NIC)), a wired or wireless (such as IEEE 802.11 Wireless LAN (WLAN)) interface, a Worldwide Interoperability for Microwave Access (WiMAX) interface, an Ethernet interface, a Universal Serial Bus (USB) interface, a cellular network interface, a Bluetooth™ interface, a Near Field Communication (NFC) interface, and so forth. Communication interface 1706 may facilitate communication within a variety of network and protocol types, including wired networks (e.g., LAN, cable, etc.) and wireless networks (e.g., WLAN, cellular, satellite, etc.), the Internet, and the like. The communication interface 1706 may also provide for communication with external storage devices (not shown), such as in a storage array, network attached storage, storage area network, and the like.
In some examples, a display device 1708, such as a monitor, may be included for displaying information and images to a user. Other I/O devices 1710 may be devices that receive various inputs from a user and provide various outputs to the user, and may include touch input devices, gesture input devices, cameras, keyboards, remote controls, mice, printers, audio input/output devices, and so forth.
The techniques described herein may be supported by these various configurations of the electronic device 1700 and are not limited to specific examples of the techniques described herein. The functionality may also be implemented, in whole or in part, on a "cloud" using a distributed system, for example. The cloud includes and/or represents a platform for resources. The platform abstracts underlying functionality of hardware (e.g., servers) and software resources of the cloud. The resources may include applications and/or data that may be used when performing computing processing on a server remote from the electronic device 1700. Resources may also include services provided over the internet and/or over a subscriber network such as a cellular or Wi-Fi network. The platform may abstract resources and functionality to connect the electronic device 1700 with other electronic devices. Thus, implementations of the functionality described herein may be distributed throughout the cloud. For example, the functionality can be implemented in part on the electronic device 1700 and in part by a platform that abstracts the functionality of the cloud.
While the disclosure has been illustrated and described in detail in the drawings and the foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive; the present disclosure is not limited to the disclosed embodiments. Variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed subject matter, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps not listed, the indefinite article "a" or "an" does not exclude a plurality, the term "plurality" means two or more, and the term "based on" is to be construed as "based at least in part on". The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.

Claims (22)

1. A data processing method for a circuit having a memory function, the circuit comprising a plurality of single-port storage blocks and a shared cache block, the method comprising:
in response to receiving a first read request and a first write request for a first single-port storage block of the plurality of single-port storage blocks in a first clock cycle, performing a read operation for the first single-port storage block according to the first read request, wherein the first write request includes first write data and a first write address;
sending a cache request for the first write data to the shared cache block so as to cache the first write data to a target location in the shared cache block, wherein the target location is a storage location in the shared cache block that corresponds to the first write address of the first single-port storage block; and
in response to receiving, in a second clock cycle, a cache request for second write data for the target location of the shared cache block, performing an operation of transferring the first write data to the first single-port storage block before caching the second write data to the shared cache block.
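For illustration only, and not as part of the claims, the following is a minimal behavioral sketch of the handling recited in claim 1; all class and function names are assumptions, not the patented circuit. A single-port storage block can service only one access per clock cycle, so a simultaneous read and write is resolved by serving the read and parking the write data in the shared cache block until it can be drained.

```python
# Illustrative sketch of claim 1 (hypothetical names, not the patented design).

class SinglePortStorageBlock:
    def __init__(self, depth):
        self.cells = [0] * depth              # one storage location per address

    def read(self, addr):                     # the single port's one access this cycle
        return self.cells[addr]

    def write(self, addr, data):
        self.cells[addr] = data


class SharedCacheBlock:
    """Shared cache with the same number of locations as each storage block."""
    def __init__(self, depth):
        self.entries = {}                     # location -> (block, addr, data)

    def cache_write(self, location, block, addr, data):
        # Claim 1: if the target location already holds deferred write data,
        # transfer that data to its single-port storage block before caching
        # the new write data at the same location.
        if location in self.entries:
            old_block, old_addr, old_data = self.entries.pop(location)
            old_block.write(old_addr, old_data)
        self.entries[location] = (block, addr, data)


def read_and_defer_write(block, cache, read_addr, write_addr, write_data):
    """One clock cycle: the read uses the single port; the write is deferred."""
    read_data = block.read(read_addr)                       # read operation
    cache.cache_write(write_addr, block, write_addr, write_data)
    return read_data
```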
2. The method of claim 1, wherein the second write data is from a second read request and a second write request received in the second clock cycle for a second single-port storage block of the plurality of single-port storage blocks, the second write request including the second write data and a second write address, the target location being a storage location in the shared cache block that corresponds to the second write address of the second single-port storage block.
3. The method of claim 2, wherein each of the shared cache block and the plurality of single-port storage blocks contains the same number of storage locations, and, for each of the plurality of single-port storage blocks, the storage locations in that single-port storage block have a one-to-one correspondence with the storage locations in the shared cache block,
and wherein the first write address indicates a first storage location of the storage locations of the first single-port storage block, and the target location is a target storage location of the storage locations of the shared cache block that corresponds to the first storage location.
4. The method of claim 3, wherein the second write address indicates a second storage location of the storage locations of the second single-port storage block, and the target location is a target storage location of the storage locations of the shared cache block that corresponds to the second storage location.
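As a hypothetical two-cycle walk-through of the address correspondence in claims 2 to 4 (reusing the sketch above; the addresses and data values are made up), a cycle-2 write that collides at the same address in a different block displaces the pending cycle-1 write back into its own block:

```python
DEPTH = 8
block_a = SinglePortStorageBlock(DEPTH)
block_b = SinglePortStorageBlock(DEPTH)
shared_cache = SharedCacheBlock(DEPTH)

# Cycle 1: block A receives a read (address 2) and a write (address 5) at once;
# the write data is cached at shared-cache location 5 (one-to-one with address 5).
read_and_defer_write(block_a, shared_cache, read_addr=2, write_addr=5, write_data=0xA5)

# Cycle 2: block B also gets a write to address 5, which maps to the same
# shared-cache location, so the pending 0xA5 is first written into block A.
read_and_defer_write(block_b, shared_cache, read_addr=0, write_addr=5, write_data=0xB5)
assert block_a.read(5) == 0xA5
```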
5. The method of claim 1, further comprising:
after caching the first write data at the target location in the shared cache block, identifying a mapping relationship between the target location and the first single-port storage block.
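The mapping of claim 5 could be kept, for example, as a per-location tag; the structure below is an assumed illustration only and is not taken from the specification.

```python
class CacheTags:
    """One valid bit and one owning-block identifier per shared-cache location."""
    def __init__(self, depth):
        self.valid = [False] * depth
        self.block_id = [None] * depth        # which storage block owns the pending data

    def mark(self, location, block_id):       # after caching the first write data
        self.valid[location] = True
        self.block_id[location] = block_id

    def clear(self, location):                # after the data has been transferred out
        self.valid[location] = False
        self.block_id[location] = None
```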
6. The method of claim 1, wherein each of the plurality of single-port storage blocks has a corresponding cache bank, and wherein performing the operation of transferring the first write data to the first single-port storage block comprises:
reading the first write data from the shared cache block;
caching the first write data into a first cache bank corresponding to the first single-port storage block;
in response to receiving a cache request for third write data for the first cache bank in a third clock cycle, reading the first write data from the first cache bank prior to caching the third write data to the first cache bank; and
writing the first write data into the first single-port storage block.
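Claim 6 drains the deferred write in two stages: from the shared cache block into a per-block cache bank, and from the cache bank into the single-port storage block only when a later request needs that bank. A minimal sketch, assuming the same helper classes as above (structure and names are illustrative only):

```python
class CacheBank:
    """Per-block cache bank holding at most one deferred write in this sketch."""
    def __init__(self, block):
        self.block = block
        self.pending = None                   # (addr, data) or None

    def accept(self, addr, data):
        # Claim 6: before caching new data, flush any older pending entry
        # into the single-port storage block this bank belongs to.
        if self.pending is not None:
            old_addr, old_data = self.pending
            self.block.write(old_addr, old_data)
        self.pending = (addr, data)


def transfer_from_shared_cache(shared_cache, location, bank):
    """Move a deferred write from the shared cache block into a cache bank."""
    block, addr, data = shared_cache.entries.pop(location)
    bank.accept(addr, data)
```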
7. The method of claim 6, further comprising:
after the first write data is cached in the first cache bank corresponding to the first single-port storage block, identifying a mapping relationship between the first cache bank and the first write address.
8. The method of claim 7, further comprising:
in response to receiving a third read request for a third single-port storage block of the plurality of single-port storage blocks in a fourth clock cycle, determining whether data to be read by the third read request is stored in the shared cache block or in a third cache bank corresponding to the third single-port storage block; and
in response to the data to be read by the third read request being stored in either of the shared cache block and the third cache bank, reading the data from whichever of the shared cache block and the third cache bank stores the data.
9. The method of claim 8, further comprising:
in response to neither the shared cache block nor the third cache bank storing the data to be read by the third read request, reading the data from the third single-port storage block.
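Claims 8 and 9 define the read path: a read first looks for a pending write to the requested address in the shared cache block and in the block's own cache bank, and only falls through to the single-port storage block when neither holds that data. An illustrative sketch under the same assumptions as the helpers above:

```python
def read_with_lookup(block, addr, shared_cache, bank):
    entry = shared_cache.entries.get(addr)
    if entry is not None and entry[0] is block:
        return entry[2]                       # pending data still in the shared cache
    if bank.pending is not None and bank.pending[0] == addr:
        return bank.pending[1]                # pending data in this block's cache bank
    return block.read(addr)                   # no pending write: read the block itself
```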
10. A data processing apparatus for a circuit having a memory function, the circuit comprising a plurality of single-port storage blocks and a shared cache block, the apparatus comprising:
a first read module configured to, in response to receiving a first read request and a first write request for a first single-port storage block of the plurality of single-port storage blocks in a first clock cycle, perform a read operation for the first single-port storage block according to the first read request, wherein the first write request includes first write data and a first write address;
a write module configured to send a cache request for the first write data to the shared cache block so as to cache the first write data to a target location in the shared cache block, wherein the target location is a storage location in the shared cache block that corresponds to the first write address of the first single-port storage block; and
a transmission module configured to, in response to receiving, in a second clock cycle, a cache request for second write data for the target location of the shared cache block, perform an operation of transferring the first write data to the first single-port storage block before caching the second write data to the shared cache block.
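The apparatus claims mirror the method claims as modules. Purely as an assumed composition (module boundaries and names are hypothetical, not taken from the specification), the three elements of claim 10 could be wired together as follows:

```python
class DataProcessingApparatus:
    def __init__(self, blocks, shared_cache):
        self.blocks = blocks                  # plurality of single-port storage blocks
        self.shared_cache = shared_cache      # shared cache block

    def handle(self, block_index, read_addr, write_addr, write_data):
        block = self.blocks[block_index]
        # First read module: service the read in this clock cycle.
        read_data = block.read(read_addr)
        # Write module + transmission module: defer the write via the shared
        # cache, which first transfers any displaced data back to its block.
        self.shared_cache.cache_write(write_addr, block, write_addr, write_data)
        return read_data
```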
11. The apparatus of claim 10, wherein the second write data is from a second read request and a second write request received in the second clock cycle for a second single-port storage block of the plurality of single-port storage blocks, the second write request including the second write data and a second write address, the target location being a storage location in the shared cache block that corresponds to the second write address of the second single-port storage block.
12. The apparatus of claim 11, wherein each of the shared cache block and the plurality of single-port storage blocks contains the same number of storage locations, and, for each of the plurality of single-port storage blocks, the storage locations in that single-port storage block have a one-to-one correspondence with the storage locations in the shared cache block,
and wherein the first write address indicates a first storage location of the storage locations of the first single-port storage block, and the target location is a target storage location of the storage locations of the shared cache block that corresponds to the first storage location.
13. The apparatus of claim 12, wherein the second write address indicates a second storage location of the storage locations of the second single-port storage block, the target location being a target storage location of the storage locations of the shared cache block that corresponds to the second storage location.
14. The apparatus of claim 10, further comprising:
a first identification module configured to, after the first write data is cached at the target location in the shared cache block, identify a mapping relationship between the target location and the first single-port storage block.
15. The apparatus of claim 10, wherein each of the plurality of single-port storage blocks has a corresponding cache bank, and wherein the transmission module comprises:
a first read submodule configured to read the first write data from the shared cache block;
the first write-in submodule is configured to cache the first write data into a first cache bar corresponding to the first single-port storage block;
a second read submodule configured to, in response to receiving a cache request for third write data for the first cache bank in a third clock cycle, read the first write data from the first cache bank before caching the third write data to the first cache bank; and
a second write submodule configured to write the first write data into the first single-port storage block.
16. The apparatus of claim 15, further comprising:
the second identification module is configured to identify a mapping relationship between the first cache bank and the first write address after the first write data is cached in the first cache bank corresponding to the first single-port storage block.
17. The apparatus of claim 16, further comprising:
a first determining module configured to, in response to receiving a third read request for a third single-port storage block of the plurality of single-port storage blocks in a fourth clock cycle, determine whether data to be read by the third read request is stored in the shared cache block or in a third cache bank corresponding to the third single-port storage block; and
a second reading module configured to, in response to the data to be read by the third read request being stored in either of the shared cache block and the third cache bank, read the data from whichever of the shared cache block and the third cache bank stores the data.
18. The apparatus of claim 17, further comprising:
a third reading module configured to, in response to neither the shared cache block nor the third cache bank storing the data to be read by the third read request, read the data from the third single-port storage block.
19. A chip, comprising:
at least one processor; and
a memory having a computer program stored thereon,
wherein the computer program, when executed by the at least one processor, causes the at least one processor to perform the method of any one of claims 1 to 9.
20. An electronic device comprising the chip of claim 19.
21. A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, causes the processor to carry out the method of any one of claims 1 to 9.
22. A computer program product comprising a computer program which, when executed by a processor, causes the processor to carry out the method of any one of claims 1 to 9.
CN202210399870.5A 2022-04-15 2022-04-15 Data processing method, device, chip, equipment and medium Pending CN114706531A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210399870.5A CN114706531A (en) 2022-04-15 2022-04-15 Data processing method, device, chip, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210399870.5A CN114706531A (en) 2022-04-15 2022-04-15 Data processing method, device, chip, equipment and medium

Publications (1)

Publication Number Publication Date
CN114706531A (en) 2022-07-05

Family

ID=82173859

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210399870.5A Pending CN114706531A (en) 2022-04-15 2022-04-15 Data processing method, device, chip, equipment and medium

Country Status (1)

Country Link
CN (1) CN114706531A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116340214A (en) * 2023-02-28 2023-06-27 中科驭数(北京)科技有限公司 Cache data storage and reading method, device, equipment and medium
CN116340214B (en) * 2023-02-28 2024-01-02 中科驭数(北京)科技有限公司 Cache data storage and reading method, device, equipment and medium
CN116049032A (en) * 2023-03-30 2023-05-02 摩尔线程智能科技(北京)有限责任公司 Data scheduling method, device and equipment based on ray tracing and storage medium

Similar Documents

Publication Publication Date Title
CN114706531A (en) Data processing method, device, chip, equipment and medium
CN112214166B (en) Method and apparatus for transmitting data processing requests
KR102545689B1 (en) Computing system with buffer and method of operation thereof
US20110289243A1 (en) Communication control device, data communication method and program
US20190196989A1 (en) Method, Apparatus, and System for Accessing Memory Device
CN114201268B (en) Data processing method, device and equipment and readable storage medium
CN109086168A (en) A kind of method and its system using hardware backup solid state hard disk writing rate
CA3129982A1 (en) Method and system for accessing distributed block storage system in kernel mode
CN111064804A (en) Network access method and device
KR20200001208A (en) Convergence Semiconductor Apparatus and Operation Method Thereof, Stacked Memory Apparatus Having the Same
WO2013030628A1 (en) Integrated circuit device, memory interface module, data processing system and method for providing data access control
CN104866432A (en) Memory subsystem with wrapped-to-continuous read
US9734087B2 (en) Apparatus and method for controlling shared cache of multiple processor cores by using individual queues and shared queue
US20130282971A1 (en) Computing system and data transmission method
KR101103619B1 (en) Multi-port memory system and access control method thereof
CN115904259B (en) Processing method and related device of nonvolatile memory standard NVMe instruction
CN116909484A (en) Data processing method, device, equipment and computer readable storage medium
CN113296691A (en) Data processing system, method, device and electronic equipment
CN111949585A (en) Data conversion processing method and device
CN111949371A (en) Command information transmission method, system, device and readable storage medium
CA3238254A1 (en) Storage control method, storage controller, storage chip, network card, and readable medium
US20200242040A1 (en) Apparatus and Method of Optimizing Memory Transactions to Persistent Memory Using an Architectural Data Mover
US11188140B2 (en) Information processing system
US11822816B2 (en) Networking device/storage device direct read/write system
CN117880364B (en) Data transmission method, system and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination