CN116069265B - Storage and data processing method, device and storage medium - Google Patents

Storage and data processing method, device and storage medium

Info

Publication number
CN116069265B
CN116069265B
Authority
CN
China
Prior art keywords
data
storage area
target data
storage
pointer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310283736.3A
Other languages
Chinese (zh)
Other versions
CN116069265A
Inventor
郑瀚寻
马学韬
杨龚轶凡
闯小明
周阳泓博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhonghao Xinying Hangzhou Technology Co ltd
Original Assignee
Zhonghao Xinying Hangzhou Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhonghao Xinying Hangzhou Technology Co ltd filed Critical Zhonghao Xinying Hangzhou Technology Co ltd
Priority to CN202310283736.3A priority Critical patent/CN116069265B/en
Publication of CN116069265A publication Critical patent/CN116069265A/en
Application granted granted Critical
Publication of CN116069265B publication Critical patent/CN116069265B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0656Data buffering arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0638Organizing or formatting or addressing of data
    • G06F3/0644Management of space entities, e.g. partitions, extents, pools
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671In-line storage system
    • G06F3/0683Plurality of storage devices
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention discloses a storage device and a data processing method, apparatus and storage medium, and relates to the technical field of data buffering. The method includes acquiring target data and storing the target data in a pre-established storage area, where the storage area comprises at least one static random access memory and the data storage capacity of each static random access memory is the same. The method splits a complete piece of data into several segments and stores them in serially connected static random access memories, so that static random access memories of small width can store data of larger width. The invention is therefore applicable to data of different widths, which reduces the preparatory workload of data processing and improves working efficiency. It alleviates, to a certain extent, the inconvenience of having to write separate data processing programs for hardware of different specifications and for data of different widths.

Description

Storage and data processing method, device and storage medium
Technical Field
The invention relates to the technical field of data buffering, in particular to a storage and data processing method, device and storage medium.
Background
When writing a software program for a hardware project, it is often necessary to use a number of storage areas of different widths. Due to hardware limitations, the data width that a storage area can buffer usually depends on the width of the static random access memory (SRAM) used in it, and if SRAMs of different widths are used in the program, separate buffer programs have to be written for each of them.
In the prior art, a buffer program must be written specifically for the width of the data to be stored in the storage area, and buffer programs for different data widths are not interchangeable. A single project therefore often requires several different buffer programs to accommodate the data of different widths that may occur in it, so the programming and debugging workload is relatively large, the programming content is repetitive, and efficiency is low.
Disclosure of Invention
The invention aims to provide a storage device and a data processing method, apparatus and storage medium, so as to solve the technical problem that buffer programs for different data widths are not interchangeable and a single project has to write several different buffer programs. To achieve the above purpose, the present invention provides the following technical solutions.
In a first aspect, the present invention proposes a storage device, comprising:
the acquisition module is used for acquiring target data;
a storage area comprising at least one static random access memory;
the control module is used for splitting the target data into a plurality of data segments, and the width of each data segment is smaller than or equal to the width of the static random access memory;
the control module is further configured to store each data segment in each static random access memory in the storage area in sequence according to the splitting order of the plurality of data segments;
the static random access storages are sequentially connected, and each static random access storage stores one data segment at most.
In a second aspect, the present invention proposes a data processing method, including:
acquiring target data;
storing the target data in a storage area, the storage area being pre-established, the storage area comprising at least one static random access memory;
the storing the target data in the storage area includes: splitting the target data into a plurality of data segments, each data segment having a width less than or equal to a width of the static random access memory;
according to the splitting sequence of the data segments, each data segment is sequentially stored into each static random access memory in the storage area;
The static random access memories are sequentially connected, and each static random access memory stores at most one data segment.
In a third aspect, the present invention provides a data processing apparatus comprising: a storage module and a processing module;
the processing module is used for:
acquiring target data;
and establishing a storage area in the storage module, wherein the storage area comprises at least one static random access storage module;
splitting the target data into a plurality of data segments, wherein the width of each data segment is smaller than or equal to the width of the static random access storage module;
according to the splitting sequence of the data segments, each data segment is sequentially stored into each static random access storage module in the storage area;
the static random access storage modules are sequentially connected, and each static random access storage module stores at most one data segment.
In a fourth aspect, the present invention proposes a computer readable storage medium having stored therein at least one instruction, at least one program, a set of codes or a set of instructions, the at least one instruction, the at least one program, the set of codes or the set of instructions being loaded and executed by a processor to implement the data processing method described above.
Compared with the prior art, the method of the invention splits a complete piece of data into several segments and stores each split data segment in one of several serially connected static random access memories, so that static random access memories of small width can store data of larger width. The invention is therefore applicable to data of different widths, which reduces the preparatory workload of data processing and improves working efficiency. It alleviates, to a certain extent, the inconvenience of having to write separate data processing programs for hardware of different specifications and for data of different widths.
Drawings
FIG. 1 is a flow chart of an embodiment of the method of the present invention;
FIG. 2 is a flow chart of storing target data in a storage area according to an embodiment of the method of the present invention;
FIG. 3 is a schematic diagram illustrating the internal structure of a storage area according to an embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating the flow of data in a storage area according to an embodiment of the present invention;
FIG. 5 is a flowchart of a target data cache according to an embodiment of the present invention;
FIG. 6 is a flowchart of another embodiment of the present invention for buffering target data;
FIG. 7 is a schematic diagram of a structure of a storage device according to an embodiment of the present invention;
Fig. 8 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in further detail with reference to the accompanying drawings. It will be apparent that the described embodiments are merely some of the embodiments of the invention, not all of them. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic may be included in at least one implementation of the invention. The terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may include one or more of the feature, either explicitly or implicitly. Moreover, the terms "first," "second," and the like, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in other sequences than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," "including," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system/apparatus, article, or device that comprises a list of steps or units/modules is not necessarily limited to those steps or units/modules that are expressly listed or inherent to such process, method, article, or device, but may include other steps or units/modules that are not expressly listed.
An exemplary flow of a data processing method provided by the present invention is described below. FIG. 1 is a flow chart of a data processing method according to an embodiment of the present invention. The method includes the operation steps shown in the examples or flow charts, but more or fewer steps may be included based on conventional or non-inventive labor. The order of steps recited in the embodiments is only one of several possible execution orders and does not represent the only one; in actual execution (for example, in a parallel processor or multi-threaded processing environment), the steps may be executed in parallel or in the order shown in the embodiments or drawings.
First, a specific description is given of a data processing method provided by the present invention. As shown in fig. 1, the method includes:
s100: target data is acquired.
Specifically, the target data in the method of the present invention refers to any data that needs to be buffered when a software program is written for a hardware project. Of course, the method of the invention is not limited to data buffering when writing software programs for hardware projects; in other application fields it can also be used to buffer data wherever the technical problem is the same as that described in the background of the invention. In that case, the target data is any data that needs to be buffered.
It should be clear that in the method of the present invention, the target data may be any object, including but not limited to pages, scripts, pictures, videos, files, programs, code, and the like. The method for acquiring the target data may be any method in the prior art, and the present invention is not limited in this respect. For example, the data may be read directly from a database or called from other buffers.
In a specific embodiment of the present invention, the target data is obtained by reading the corresponding data from the database in the order in which the data is called when the software program is written for the hardware project.
S200: and storing the target data into a storage area, wherein the storage area is pre-established.
Specifically, as shown in fig. 3, the storage area 100 is pre-established, that is, the storage area 100 is established, based on the target data to be buffered, in hardware that has a plurality of static random access memories. The hardware with multiple static random access memories may be any common hardware device on the market, which is not limited here. The created storage area 100 is at least capable of accommodating one piece of data, i.e. the storage capacity of the storage area 100 is at least one data item.
Of course, in other embodiments of the present invention, the storage area 100 may not be pre-established; instead, a corresponding storage area 100 is established before each piece of data is buffered. It is easy to see that if the storage area 100 is built in advance, then when the software program is written and two pieces of data adjacent in time sequence are called, the two pieces of data can use the same storage area 100 at different points in time according to the calling order of the data, which saves the process of building the storage area 100 and thus shortens the data caching time.
As can be seen from the above, in the method of the present invention, each piece of data to be buffered needs a storage area 100 adapted to it; that is, the storage area 100 must be able to accommodate the target data. For example, after the target data is acquired in step S100, if there is no storage area 100 adapted to the target data, such a storage area 100 is established; if a storage area 100 corresponding to the target data already exists, it does not need to be re-established.
In a specific embodiment of the method of the present invention, as shown in fig. 2, storing the target data in the storage area in step S200 includes:
S210: the main purpose of splitting the target data into a plurality of data segments is to store the target data into serial static random access storages in the form of data segments, so that the target data can be conveniently stored and obtained by subsequent splicing. It is therefore desirable to know that when splitting the target data, the width of each data segment needs to be less than or equal to the width of the sram in order to be able to store the split data segments in the respective sram later.
As can be seen from the above, the storage area 100 established by the method of the present invention includes at least one sram. It is readily appreciated that if the storage area 100 has only one sram, the maximum width of the data that the storage area 100 can buffer is equal to the width of one sram. If the storage area 100 has a plurality of sram memories, the maximum width of the data that can be buffered in the storage area 100 is equal to the sum of the widths of the plurality of sram memories. Thus, in the method of the present invention, as shown in fig. 3, the storage area 100 previously established has n static random access memories, where n is a positive integer greater than or equal to 1. The data storage capacity of each sram is the same in this embodiment. Of course, in other embodiments of the present invention, the data storage capacity of each sram may be different based on hardware or special requirements.
And S220, according to the splitting sequence of the plurality of data segments, sequentially storing each data segment into each static random access memory in the storage area.
Specifically, the purpose of storing each data segment according to the splitting sequence of the data segments is to splice each data segment according to the storing sequence when the data segments are subsequently fetched, so as to obtain the correct target data. In other embodiments of the method of the present invention, if each data segment can be marked, the data segments do not need to be stored according to the storage sequence of the data segments, and random storage can be performed. For example: a physical address is programmed into the data segment for subsequent determination of the splice location of the data segment.
It should be clear that if the data segments are subsequently spliced in the order of storage, it is necessary to ensure that the SRAMs in the storage area 100 are sequentially connected and that each SRAM stores at most one data segment. That is, only one data segment is stored in each SRAM, and the connection order of the SRAMs corresponds to the order of the stored data segments within the target data. When splicing, the data segments are taken out in the connection order of the SRAMs and spliced in sequence, so that the target data is recovered correctly.
Meanwhile, it should be appreciated that in the method of the present invention, in order to maximize the utilization of the storage area 100, it is preferable that the width of each data segment equals the width of the SRAM when the target data is split. That is, when the target data is split into n data segments, the width of each of the first n-1 data segments equals the width of the static random access memory, and the width of the nth data segment is less than or equal to the width of the SRAM, where n is a positive integer greater than or equal to 1.
It is to be understood that, based on the above embodiments, in order to further improve the utilization of the storage area 100, in one embodiment of the present invention each data segment is in one-to-one correspondence with an SRAM.
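The splitting rule just described can be illustrated with a minimal sketch in Python; the byte-oriented representation, the function name and the types are assumptions made for illustration and are not part of the patent:

```python
def split_into_segments(target_data: bytes, sram_width: int) -> list[bytes]:
    """Split target data into data segments no wider than one SRAM.

    The first n-1 segments each fill an SRAM completely; the nth segment
    holds the remainder and may be narrower, so each segment maps onto
    exactly one SRAM in the serial chain.
    """
    if sram_width <= 0:
        raise ValueError("SRAM width must be positive")
    return [target_data[i:i + sram_width]
            for i in range(0, len(target_data), sram_width)]
```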
It should be clear that the storage area 100 is pre-established according to the width of the target data that the software program written for the hardware project needs to call. Therefore, if the pre-established storage area 100 can only hold one target data item, the storage area 100 can subsequently only be used to cache other data whose width is equal to or smaller than the width of that target data.
Thus, in one embodiment of the method of the present invention, in order to allow the storage area 100 to cache all the data that the program will call, the storage area 100 may be established according to the data with the largest width among all the data; that is, the storage area 100 can cache all of the data in turn at different points in time. In another embodiment of the method of the present invention, a corresponding storage area 100 may be created in advance for each data width; that is, each storage area 100 buffers data of one data width, and the data widths handled by different storage areas 100 differ.
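Under the assumption that every storage area is built from identical SRAMs, sizing such per-width storage areas reduces to a ceiling division; a small illustrative sketch (the helper name and example widths are hypothetical):

```python
import math

def srams_per_storage_area(data_widths: list[int], sram_width: int) -> dict[int, int]:
    """For each distinct data width to be buffered, compute how many serially
    connected SRAMs its dedicated storage area needs."""
    return {width: math.ceil(width / sram_width) for width in set(data_widths)}

# For example, with 32-bit SRAMs and data that is 32, 64 or 100 bits wide:
# srams_per_storage_area([32, 64, 100], 32) -> {32: 1, 64: 2, 100: 4}
```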
As shown in fig. 1 and 2, after the target data is stored in the storage area 100 (i.e., each data segment is sequentially stored in each SRAM in the storage area), the method further includes S300: if a target data output instruction is received, acquiring each data segment corresponding to the target data and obtaining the target data based on the data segments, so as to output the target data out of the storage area.
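The output side can be sketched just as briefly: assuming the data segments are read back in the connection order of the SRAMs, splicing is a plain concatenation (the function name is illustrative, not from the patent):

```python
def reassemble(segments: list[bytes]) -> bytes:
    """Splice the data segments back together in the connection order of the
    serially connected SRAMs to recover the original target data."""
    return b"".join(segments)
```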
It can be seen that the method provided by the above embodiments of the present invention can be applied to buffering data of different widths when writing a software program for a hardware project. The method splits a complete piece of data into several segments for storage, so that static random access memories of small width can also store data of larger width. The invention is therefore applicable to data of different widths, which reduces the preparatory workload of data processing and improves working efficiency. It alleviates, to a certain extent, the inconvenience of having to write separate data processing programs for hardware of different specifications and for data of different widths.
It should be clear that in the method of the present invention, the created storage area 100 may be used to store several pieces of data at the same time. If the storage area 100 stores several pieces of data at the same time, the data in the storage area 100 must be prevented from being overwritten or erroneously read. In a specific embodiment of the method of the present invention, after each data segment is sequentially stored in each SRAM in the storage area 100 in step S220, the method further includes: acquiring a first pointer and a second pointer, and obtaining the storage state of the storage area 100 from the number of data items between the storage addresses to which the first pointer and the second pointer respectively point, where the states of the storage area 100 include: the storage area 100 is empty and the storage area 100 is full. If the storage area 100 is full, storing data into the storage area 100 is stopped; if the storage area 100 is empty, reading data from the storage area 100 is stopped. Data in the storage area 100 is thereby effectively prevented from being overwritten or erroneously read.
Specifically, the storage address pointed to by the first pointer holds the earliest-stored data in the storage area 100, and the storage address pointed to by the second pointer holds the latest-stored data in the storage area 100. For example, as shown in fig. 4, which illustrates the flow of data in the storage area 100, suppose that x data items can be stored in the storage area 100 at the same time, where x is a positive integer greater than or equal to 1. Data 1 is the data that first entered the storage area 100, and the first pointer points to data 1; data x is the data that last entered the storage area 100, and the second pointer points to data x.
Further, obtaining the state of the storage area 100 from the number of data items between the storage addresses pointed to by the first pointer and the second pointer includes: if the number of data items between the storage addresses pointed to by the first pointer and the second pointer is 0, determining that the storage area 100 is empty; if the number of data items equals the accommodating space of the storage area 100, determining that the storage area 100 is full. As shown in fig. 4, assume the storage area 100 can accommodate x data items, where x is a positive integer greater than or equal to 1: if the number of data items between the storage addresses pointed to by the first pointer and the second pointer equals x, the storage area 100 is judged to be full; if the two pointers point to the same storage address, the storage area 100 is judged to be empty.
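A hedged sketch of this pointer-based occupancy check follows; the monotonically increasing, counter-style pointers are a simplification for illustration (hardware FIFOs typically use wrap-around addresses plus a wrap flag):

```python
class StorageAreaPointers:
    """Tracks occupancy of a storage area that holds up to `capacity` entries."""

    def __init__(self, capacity: int) -> None:
        self.capacity = capacity
        self.first = 0    # address of the earliest-stored data
        self.second = 0   # address one past the latest-stored data

    def count(self) -> int:
        return self.second - self.first

    def is_empty(self) -> bool:
        return self.count() == 0          # both pointers reference the same address

    def is_full(self) -> bool:
        return self.count() == self.capacity

    def on_write(self) -> None:
        if self.is_full():
            raise BufferError("storage area full: stop storing data")
        self.second += 1

    def on_read(self) -> None:
        if self.is_empty():
            raise BufferError("storage area empty: stop reading data")
        self.first += 1
```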
It should be clear that, in other embodiments of the present invention, if the storage area 100 is used to store data of different widths at the same time, then when judging whether the storage area 100 is full it is necessary to consider both the remaining storage capacity of the storage area 100 and the width of the target data to be input into the storage area 100.
Specifically, as shown in fig. 4, if data x+1 needs to be input into the storage area 100 and x data items already exist in the storage area 100, where x is a positive integer greater than or equal to 1, the difference between the storage addresses pointed to by the first pointer and the second pointer is converted into the storage capacity already occupied in the storage area 100, and this occupied capacity is subtracted from the total storage capacity of the storage area 100 to obtain the remaining storage capacity. If the remaining storage capacity is smaller than the data width of data x+1, the storage area 100 is judged to be full.
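That remaining-capacity check can be written as a single comparison; the parameter names and the capacity unit below are assumptions for illustration:

```python
def can_accept(first_addr: int, second_addr: int,
               total_capacity: int, incoming_width: int) -> bool:
    """Decide whether data x+1 still fits: the pointer difference gives the
    occupied capacity, and the remainder must cover the incoming data width."""
    occupied = second_addr - first_addr
    remaining = total_capacity - occupied
    return remaining >= incoming_width
```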
In one embodiment of the method of the present invention, as shown in fig. 5, after the target data is acquired in step S100, the method further includes: writing the target data into the input buffer 200, the input buffer 200 being pre-established. It should be clear that in the method of the present invention, the input buffer 200 may be built in the same storage hardware as the storage area 100, or the two may be built in separate storage hardware. In the method of the present invention, the target data is split into n data segments in the input buffer 200, and the n data segments are then sequentially input into the n SRAMs of the storage area 100.
In another embodiment of the present invention, as shown in fig. 5, outputting the target data out of the storage area in step S300 includes: storing the target data output from the storage area 100 into the output buffer 300, the output buffer 300 being pre-established. As described above, in the method of the present invention, the output buffer 300, the input buffer 200 and the storage area 100 may be built in the same storage hardware, or may each be built in separate storage hardware.
As can be seen from the above, in the method of the present invention, the storage area 100, the input buffer 200 and the output buffer 300 must each be able to store at least one piece of data; the number of data items that each of them can store is not limited by the method of the present invention.
In a preferred embodiment of the method of the present invention, as shown in fig. 6, the target data includes first target data and second target data, and the input buffer 200 is able to store both. Splitting the target data into a plurality of data segments in step S210 then includes: splitting the first target data into a plurality of first data segments and splitting the second target data into a plurality of second data segments. Specifically, the storage area 100 has a first storage area 101 corresponding to the first target data and a second storage area 102 corresponding to the second target data. Sequentially storing each data segment in each SRAM in the storage area in step S220 then includes: storing each first data segment in the first storage area 101 and each second data segment in the second storage area 102.
Further, when several target data need to be cached, the two target data, i.e. the first target data and the second target data above, are first input into the input buffer 200 in the order in which the program will call them. The method of the present invention uses the first-in first-out principle, i.e. data that first enters the input buffer 200 is first output from the output buffer 300; in other words, data that the program needs to call first is input first and output first.
Further, the timing of splitting the first target data and the second target data in the input buffer 200 may vary: they may be split one after another in the input order, or split simultaneously. In the method of the present invention it is preferable to split the first target data and the second target data simultaneously and input them into the storage area 100 simultaneously, which saves data buffering time. Likewise, it is preferable that the first target data and the second target data are simultaneously output from the storage area 100 into the output buffer 300.
It should be clear that, in general, at any one point in time the storage area 100 can only be either written or read; that is, at a given time, if data is being read from the storage area 100, data cannot be written into it, and if data is being written into the storage area 100, data cannot be read from it. By creating the input buffer 200 and the output buffer 300 in advance, the method of the present invention can nevertheless input two data items into the storage area 100 simultaneously and output two data items from it simultaneously, and in each read/write cycle data can be either written or read, so that a continuous data flow path is formed.
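A simplified sketch of that alternation is given below; the strict write-cycle/read-cycle rhythm is an illustrative assumption, since the patent only requires that the storage area is either written or read, never both, in any one cycle:

```python
from collections import deque

def run_cycles(in_buffer: deque, storage: deque, out_buffer: deque,
               storage_capacity: int, cycles: int) -> None:
    """Alternate write and read cycles on a single-port storage area so that
    data keeps flowing from the input buffer, through the storage area,
    to the output buffer."""
    for cycle in range(cycles):
        write_turn = (cycle % 2 == 0)
        if write_turn and in_buffer and len(storage) < storage_capacity:
            storage.append(in_buffer.popleft())    # write cycle
        elif not write_turn and storage:
            out_buffer.append(storage.popleft())   # read cycle
```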
In another embodiment of the method of the present invention, after each data segment is sequentially stored in each SRAM in the storage area 100 in step S220, the method further includes:
if the storage area 100 has no write operation and the storage area 100 stores target data, acquiring each data segment corresponding to the target data and obtaining the target data based on the data segments, so as to output the target data out of the storage area 100; that is, the target data in the storage area 100 is input into the output buffer 300.
Meanwhile, it is easily conceivable that data in the input buffer 200 and the output buffer 300 should likewise be prevented from being overwritten or misread; whether they are full or empty can also be determined with the pointer scheme described above, and the details are not repeated here.
It should be clear that if the output buffer 300 is not pre-established in the method of the present invention, the target data output instruction mentioned above may be a data call instruction issued during programming. If the output buffer 300 is pre-established, the target data output instruction mentioned above may be an indication that the output buffer 300 is not full; the output buffer 300 not being full means that it can still store data.
According to the method, a complete piece of data can be split into several segments, and several serially connected static random access memories are used to store the split data segments, so that static random access memories of small width can store data of larger width. The invention is therefore applicable to data of different widths, which reduces the preparatory workload of data processing and improves working efficiency. It alleviates, to a certain extent, the inconvenience of having to write separate data processing programs for hardware of different specifications and for data of different widths. Meanwhile, the method can adopt a double-in, double-out data caching mode, so that in every read/write cycle of the storage area the outside can either read or write data.
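A compact, self-contained round trip of the whole flow, under the byte-oriented assumptions used in the sketches above:

```python
sram_width = 4                              # bytes per SRAM word (assumed)
target = bytes(range(13))                   # a "wide" piece of target data
segments = [target[i:i + sram_width]        # S210: split into data segments
            for i in range(0, len(target), sram_width)]
assert len(segments) == 4                   # first 3 full-width, last one narrower
assert b"".join(segments) == target         # S300: splice back in connection order
```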
The storage device provided by the present invention is described below.
As shown in fig. 7, an embodiment of the present invention provides a storage device 400, comprising:
an acquisition module 401, configured to acquire target data;
the storage area 100, which includes at least one SRAM, the data storage capacity of each SRAM being the same;
A control module 402, configured to split the target data into a plurality of data segments, where the width of each data segment is less than or equal to the width of the SRAM;
the control module 402 is further configured to sequentially store each data segment in each SRAM in the storage area 100 according to the splitting order of the plurality of data segments;
the static random access memories are sequentially connected, and each static random access memory stores at most one data segment.
In another embodiment of the present invention, the control module 402 is further configured to, when a target data output instruction is received, obtain each data segment corresponding to the target data, obtain the target data based on the data segments, and output the target data out of the storage area 100. It should be noted that the target data output instruction may be a control instruction sent from the outside, or an instruction issued by some module within the storage device of the present invention. For example, if the outside world needs to call the data in the storage area 100, it sends a data call instruction to the control module 402, and the control module 402 outputs the data from the storage area 100. If data in the storage area 100 needs to be input into another storage area inside the storage device, the fact that the other storage area (e.g., the output buffer 300 below) is not full can serve as the data call instruction. The outside world in this embodiment refers to any entity or program that does not belong to the storage device of the present invention and can send a data call instruction, for example a program, an interface or a chip.
In one possible embodiment of the present invention, as shown in fig. 7, the storage device 400 further includes:
an input buffer 200, into which the target data acquired by the acquisition module 401 is written; wherein the target data includes first target data and second target data;
the control module 402 is further configured to split the first target data into a plurality of first data segments and split the second target data into a plurality of second data segments.
The storage area 100 includes: a first storage area 101 corresponding to the first target data and a second storage area 102 corresponding to the second target data;
wherein the control module 402 is further configured to store each first data segment in the first storage area 101 and each second data segment in the second storage area 102.
In one possible embodiment of the present invention, as shown in fig. 7, the storage device 400 further includes:
an output buffer 300 for storing the target data output from the storage area 100;
the control module 402 is further configured to store the target data output from the storage area 100 into the egress buffer 300.
The control module 402 is further configured to, if the storage area 100 has no write operation and the storage area 100 stores target data, acquire each data segment corresponding to the target data and obtain the target data based on the data segments, so as to output the target data out of the storage area 100.
In another embodiment of the present invention, the control module 402 is further configured to obtain a first pointer and a second pointer, and obtain a state of the storage area 100 according to the number of data between storage addresses respectively pointed by the first pointer and the second pointer;
the storage address pointed by the first pointer stores the data with the earliest storage sequence in the storage area 100, and the storage address pointed by the second pointer stores the data with the latest storage sequence in the storage area 100; the states of the storage area 100 include: the storage area 100 is empty and the storage area 100 is full.
In another embodiment of the present invention, the control module 402 is further configured to determine that the storage area 100 is empty if the number of data between the storage addresses pointed to by the first pointer and the second pointer is 0;
if the number of data between the storage addresses respectively pointed to by the first pointer and the second pointer is equal to the accommodating space of the storage area 100, it is determined that the storage area 100 is full.
In another embodiment of the present invention, the control module 402 is further configured to stop storing data in the storage area 100 if the storage area 100 is full;
If the storage area 100 is empty, reading data from the storage area 100 is stopped.
As can be seen from the above, the storage device provided in the embodiments of the present invention can split a complete piece of data into several segments and use several serially connected SRAMs to store the split data segments, so that SRAMs of small width can store data of larger width. The storage device is therefore suitable for data of different widths, which reduces the preparatory workload of data processing and improves working efficiency. It alleviates, to a certain extent, the inconvenience of having to write separate data processing programs for hardware of different specifications and for data of different widths.
The following describes the data processing device provided by the invention in detail.
As shown in fig. 8, in an embodiment of the present invention, there is provided a data processing apparatus 500 including: a storage module 501 and a processing module 502.
It should be appreciated that in the data processing apparatus 500 of the present invention, the storage module 501 may be implemented by any type of volatile or non-volatile memory device or combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk, etc. The processing module 502 may be a commercially available processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), etc.; but also digital signal processors (Digital Signal Processing, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components.
In the data processing apparatus 500 of the present invention, the processing module 502 is configured to:
acquiring target data;
and establishing a storage area in the storage module, wherein the storage area comprises at least one static random access storage module, and the data storage capacity of each static random access storage module is the same;
splitting the target data into a plurality of data segments, wherein the width of each data segment is smaller than or equal to the width of the static random access storage module;
according to the splitting sequence of the data segments, each data segment is sequentially stored into each static random access storage module in the storage area;
the static random access storage modules are sequentially connected, and each static random access storage module stores at most one data segment.
In a possible embodiment of the present invention, the processing module 502 is further configured to, if a target data output instruction is received, obtain each data segment corresponding to the target data, obtain the target data based on the data segments, and output the target data out of the storage area.
In a possible embodiment of the present invention, the processing module 502 is further configured to create a buffer in the storage module 501, and write the target data obtained by the processing module 502 into the buffer, where the target data includes first target data and second target data;
And splitting the first target data into a plurality of first data segments and splitting the second target data into a plurality of second data segments.
In a possible embodiment of the present invention, the processing module 502 is further configured to split the storage area into a first storage area corresponding to the first target data and a second storage area corresponding to the second target data;
and storing each first data segment in the first storage area, and storing each second data segment in the second storage area.
In one possible embodiment of the invention, the processing module 502 is further configured to create an output buffer area in the storage module 501, and to store the target data output from the storage area into the output buffer area.
In a possible embodiment of the present invention, the processing module 502 is further configured to, if the storage area has no write operation and the storage area stores target data, acquire each data segment corresponding to the target data and obtain the target data based on the data segments, so as to output the target data out of the storage area.
In a possible embodiment of the present invention, the processing module 502 is further configured to obtain a first pointer and a second pointer, and obtain a state of the storage area according to the number of data between storage addresses respectively pointed by the first pointer and the second pointer;
The storage address pointed by the first pointer stores data with the earliest storage sequence in the storage area, and the storage address pointed by the second pointer stores data with the latest storage sequence in the storage area;
the state of the storage area includes: the storage area is empty and the storage area is full.
In a possible embodiment of the present invention, the processing module 502 is further configured to determine that the storage area is empty if the number of data between the storage addresses pointed to by the first pointer and the second pointer is 0;
and if the number of data between the storage addresses respectively pointed by the first pointer and the second pointer is equal to the accommodating space of the storage area, judging that the storage area is full.
In one possible embodiment of the invention, the processing module 502 is further configured to,
if the storage area is full, stopping storing data into the storage area;
if the storage area is empty, stopping reading the data from the storage area.
As can be seen from the above, the data processing apparatus of the present invention can split a complete piece of data into several segments and store each split data segment in one of several serially connected static random access storage modules, so that storage modules of small width can store data of larger width. The data processing apparatus provided by the invention is therefore suitable for data of different widths, which reduces the preparatory workload of data processing and improves working efficiency. It alleviates, to a certain extent, the inconvenience of having to write separate data processing programs for hardware of different specifications and for data of different widths.
Finally, the present invention provides a computer readable storage medium.
In particular, an embodiment of the present invention provides a computer readable storage medium, where at least one instruction, at least one program, a code set, or an instruction set is stored, where the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by a processor to implement a data processing method provided in any one embodiment of the present invention.
It should be apparent that computer-readable storage media of the present invention, including both permanent and non-permanent, removable and non-removable media, may be used to implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static Random Access Memory (SRAM), dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), read Only Memory (ROM), electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. Computer-readable media, as defined herein, does not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises that element.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. which are within the spirit and principles of the present application are intended to be included within the scope of the claims of the present application.

Claims (28)

1. A storage device, comprising:
the acquisition module is used for acquiring target data, wherein the target data is data to be buffered;
a plurality of storage areas, each storage area corresponding to target data with different data widths, each storage area comprising at least one static random access memory;
the control module is used for splitting the target data into a plurality of data segments, and the width of each data segment is smaller than or equal to the width of the static random access memory;
the control module is further used for sequentially storing each data segment into each static random access memory in the storage area corresponding to the target data width according to the splitting sequence of the plurality of data segments;
the static random access memories in each storage area are sequentially connected, and each static random access memory stores one data segment at most.
2. The storage device according to claim 1, wherein
the control module is further configured to, if a target data output instruction is received, obtain each data segment corresponding to the target data, obtain the target data based on each data segment, and output the target data out of the storage area.
3. The storage device according to claim 2, further comprising:
a buffer area for writing the target data acquired by the acquisition module; wherein the target data includes first target data and second target data;
the control module is further configured to split the first target data into a plurality of first data segments and split the second target data into a plurality of second data segments.
4. The storage device according to claim 3, wherein
the storage area includes: a first storage area corresponding to the first target data, a second storage area corresponding to the second target data;
the control module is further configured to store each first data segment in the first storage area, and store each second data segment in the second storage area.
5. The storage device according to any one of claims 2 to 4, further comprising:
the output buffer area is used for storing the target data output by the storage area;
the control module is also used for storing the target data output by the storage area into the output buffer area.
6. The storage device according to any one of claims 1 to 4, wherein the control module is further configured to,
if the storage area has no write operation and the storage area stores target data, acquire each data segment corresponding to the target data, and obtain the target data based on each data segment, so as to output the target data out of the storage area.
7. The storage device according to any one of claims 1 to 4, wherein
the control module is also used for acquiring a first pointer and a second pointer, and acquiring the state of the storage area according to the number of data between the storage addresses respectively pointed by the first pointer and the second pointer;
the storage address pointed by the first pointer stores data with the earliest storage sequence in the storage area, and the storage address pointed by the second pointer stores data with the latest storage sequence in the storage area;
the state of the storage area includes: the storage area is empty and the storage area is full.
8. The storage device according to claim 7, wherein
the control module is further configured to determine that the storage area is empty if the number of data between the storage addresses pointed by the first pointer and the second pointer is 0;
and if the number of data between the storage addresses respectively pointed by the first pointer and the second pointer is equal to the accommodating space of the storage area, judging that the storage area is full.
9. The storage device according to claim 8, wherein
the control module is also used for stopping storing data into the storage area if the storage area is full;
if the storage area is empty, stopping reading the data from the storage area.
10. A method of data processing, comprising:
acquiring target data, wherein the target data is data to be buffered;
storing the target data into a storage area corresponding to the target data width, wherein a plurality of storage areas are established in advance, each storage area corresponds to target data of a different data width, and each storage area comprises at least one static random access memory;
the storing the target data in the storage area includes: splitting the target data into a plurality of data segments, each data segment having a width less than or equal to a width of the static random access memory;
according to the splitting sequence of the data segments, sequentially storing each data segment into each static random access memory in a storage area corresponding to the target data width;
the static random access memories in each storage area are sequentially connected, and each static random access memory stores one data segment at most.
11. The data processing method of claim 10, wherein after sequentially storing each data segment in each static random access memory in the storage area, the method further comprises:
if a target data output instruction is received, acquiring each data segment corresponding to the target data, and obtaining the target data based on each data segment, so as to output the target data out of the storage area.
12. The data processing method according to claim 10, wherein after the target data is acquired, the method further comprises:
writing the target data into a buffer zone, wherein the buffer zone is pre-established, and the target data comprises first target data and second target data;
the splitting the target data into a plurality of data segments includes: splitting the first target data into a plurality of first data segments and splitting the second target data into a plurality of second data segments.
13. The method of claim 12, wherein storing each data segment in each static random access memory in the storage area in turn comprises:
The storage area is provided with a first storage area corresponding to the first target data and a second storage area corresponding to the second target data;
storing each first data segment in the first storage area, and storing each second data segment in the second storage area.
14. The data processing method of claim 11, wherein the outputting the target data from the storage area comprises:
and storing the target data output by the storage area into an output buffer area, wherein the output buffer area is pre-established.
15. A data processing method according to any one of claims 10 to 13, wherein after storing the respective data segments in the respective static random access memories in the storage area in sequence, the method further comprises:
if the storage area has no write operation and the storage area stores target data, acquiring each data segment corresponding to the target data, and obtaining the target data based on each data segment, so as to output the target data out of the storage area.
16. A data processing method according to any one of claims 10 to 13, wherein after storing the respective data segments in the respective static random access memories in the storage area in sequence, the method further comprises:
acquiring a first pointer and a second pointer, and acquiring the state of the storage area according to the number of data between the storage addresses respectively pointed to by the first pointer and the second pointer;
the storage address pointed to by the first pointer stores the data with the earliest storage sequence in the storage area, and the storage address pointed to by the second pointer stores the data with the latest storage sequence in the storage area;
the state of the storage area includes: the storage area is empty and the storage area is full.
17. The data processing method of claim 16, wherein obtaining the state of the storage area according to the number of data between the storage addresses respectively pointed to by the first pointer and the second pointer comprises:
if the number of data between the storage addresses respectively pointed to by the first pointer and the second pointer is 0, determining that the storage area is empty;
and if the number of data between the storage addresses respectively pointed to by the first pointer and the second pointer is equal to the accommodating space of the storage area, determining that the storage area is full.
18. The data processing method of claim 17, wherein after the acquiring the state of the storage area, the method further comprises:
if the storage area is full, stopping storing data into the storage area;
if the storage area is empty, stopping reading the data from the storage area.
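The pointer-based state check of claims 16 to 18 can be sketched as a small circular buffer. The class name, the explicit occupancy counter, the capacity of 8 entries, and the convention that the write pointer advances to the next free slot (rather than resting on the latest stored data) are assumptions added for illustration; the empty/full rules and the stop-on-full / stop-on-empty behaviour follow the claims.

```python
# Hypothetical sketch of claims 16-18: first pointer tracks the oldest data,
# second pointer tracks where the newest data is written; the occupancy between
# them decides whether the storage area is empty or full.

CAPACITY = 8  # assumed accommodating space of the storage area, in entries


class PointerFifo:
    def __init__(self):
        self.buf = [None] * CAPACITY
        self.read_ptr = 0    # first pointer: address of the earliest stored data
        self.write_ptr = 0   # second pointer: address where the next data is stored
        self.count = 0       # number of data between the two pointers

    def is_empty(self):
        return self.count == 0           # claim 17: occupancy 0 -> storage area is empty

    def is_full(self):
        return self.count == CAPACITY    # claim 17: occupancy == capacity -> area is full

    def write(self, data):
        if self.is_full():               # claim 18: stop storing data when full
            return False
        self.buf[self.write_ptr] = data
        self.write_ptr = (self.write_ptr + 1) % CAPACITY
        self.count += 1
        return True

    def read(self):
        if self.is_empty():              # claim 18: stop reading data when empty
            return None
        data = self.buf[self.read_ptr]
        self.read_ptr = (self.read_ptr + 1) % CAPACITY
        self.count -= 1
        return data


fifo = PointerFifo()
for i in range(CAPACITY + 1):
    fifo.write(i)                        # the ninth write is refused: the area is full
print(fifo.is_full(), fifo.read(), fifo.is_empty())  # -> True 0 False
```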
19. A data processing apparatus, comprising: a storage module and a processing module;
the processing module is used for:
acquiring target data, wherein the target data is data to be buffered;
establishing a plurality of storage areas in the storage module, wherein each storage area corresponds to target data with different data widths, and each storage area comprises at least one static random access storage module;
splitting the target data into a plurality of data segments, wherein the width of each data segment is smaller than or equal to the width of the static random access storage module;
according to the splitting sequence of the data segments, sequentially storing each data segment into each static random access storage module in a storage area corresponding to the target data width;
the static random access storage modules in each storage area are connected in sequence, and each static random access storage module stores one data segment at most.
20. The data processing apparatus of claim 19, wherein
the processing module is further configured to, if a target data output instruction is received, obtain each data segment corresponding to the target data, obtain the target data based on the data segments, and output the target data from the storage area.
21. The data processing apparatus of claim 19, wherein the processing module is further configured to establish a buffer area in the storage module and write the target data acquired by the processing module into the buffer area, the target data comprising first target data and second target data;
and to split the first target data into a plurality of first data segments and the second target data into a plurality of second data segments.
22. The data processing apparatus of claim 21, wherein the processing module is further configured to split the storage area into a first storage area corresponding to the first target data and a second storage area corresponding to the second target data;
and to store each first data segment in the first storage area and each second data segment in the second storage area.
23. The data processing apparatus according to any one of claims 20 to 22, wherein the processing module is further configured to
establish an output buffer area in the storage module;
and store the target data output by the storage area into the output buffer area.
24. The data processing apparatus according to any one of claims 19 to 22, wherein the processing module is further configured to,
if the storage area has no write operation and the storage area stores target data, acquire each data segment corresponding to the target data and obtain the target data based on the data segments, so as to output the target data from the storage area.
25. The data processing apparatus according to any one of claims 19 to 22, wherein the processing module is further configured to obtain a first pointer and a second pointer, and obtain a state of the storage area according to the number of data between storage addresses to which the first pointer and the second pointer respectively point;
the storage address pointed to by the first pointer stores the data with the earliest storage sequence in the storage area, and the storage address pointed to by the second pointer stores the data with the latest storage sequence in the storage area;
the state of the storage area includes: the storage area is empty and the storage area is full.
26. The data processing apparatus of claim 25, wherein the processing module is further configured to determine that the storage area is empty if the number of data between the storage addresses respectively pointed to by the first pointer and the second pointer is 0;
and to determine that the storage area is full if the number of data between the storage addresses respectively pointed to by the first pointer and the second pointer is equal to the accommodating space of the storage area.
27. The data processing apparatus of claim 26, wherein the processing module is further configured to
stop storing data into the storage area if the storage area is full;
and to stop reading data from the storage area if the storage area is empty.
28. A computer-readable storage medium having stored therein at least one instruction, at least one program, a code set, or an instruction set, wherein the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by a processor to implement the data processing method of any one of claims 10 to 18.
CN202310283736.3A 2023-03-22 2023-03-22 Storage and data processing method, device and storage medium Active CN116069265B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310283736.3A CN116069265B (en) 2023-03-22 2023-03-22 Storage and data processing method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310283736.3A CN116069265B (en) 2023-03-22 2023-03-22 Storage and data processing method, device and storage medium

Publications (2)

Publication Number Publication Date
CN116069265A CN116069265A (en) 2023-05-05
CN116069265B true CN116069265B (en) 2024-03-19

Family

ID=86180496

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310283736.3A Active CN116069265B (en) 2023-03-22 2023-03-22 Storage and data processing method, device and storage medium

Country Status (1)

Country Link
CN (1) CN116069265B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101122887A (en) * 2007-01-17 2008-02-13 晶天电子(深圳)有限公司 Flash-memory card for caching a hard disk drive with data-area toggling of pointers
CN101894001A (en) * 2006-03-21 2010-11-24 联发科技股份有限公司 Storage device
CN102053921A (en) * 2009-11-05 2011-05-11 晨星软件研发(深圳)有限公司 Storage device and related data access method
CN103927265A (en) * 2013-01-04 2014-07-16 深圳市龙视传媒有限公司 Content hierarchical storage device, content acquisition method and content acquisition device
CN107153580A (en) * 2016-03-04 2017-09-12 北京忆恒创源科技有限公司 Obtain the devices and methods therefor of queue exact state
US10102150B1 (en) * 2017-04-28 2018-10-16 EMC IP Holding Company LLC Adaptive smart data cache eviction
CN110795372A (en) * 2018-08-03 2020-02-14 扬智科技股份有限公司 Data processing apparatus and direct memory access method
CN111209232A (en) * 2018-11-21 2020-05-29 北京百度网讯科技有限公司 Method, apparatus, device and storage medium for accessing static random access memory
CN114297129A (en) * 2021-12-08 2022-04-08 北方信息控制研究院集团有限公司 ZYNQ 7010-based message-oriented dual-core communication implementation method
CN114610951A (en) * 2020-12-08 2022-06-10 国信君和(北京)科技有限公司 Data processing method and device, electronic equipment and readable storage medium

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101894001A (en) * 2006-03-21 2010-11-24 联发科技股份有限公司 Storage device
CN101122887A (en) * 2007-01-17 2008-02-13 晶天电子(深圳)有限公司 Flash-memory card for caching a hard disk drive with data-area toggling of pointers
CN102053921A (en) * 2009-11-05 2011-05-11 晨星软件研发(深圳)有限公司 Storage device and related data access method
CN103927265A (en) * 2013-01-04 2014-07-16 深圳市龙视传媒有限公司 Content hierarchical storage device, content acquisition method and content acquisition device
CN107153580A (en) * 2016-03-04 2017-09-12 北京忆恒创源科技有限公司 Obtain the devices and methods therefor of queue exact state
US10102150B1 (en) * 2017-04-28 2018-10-16 EMC IP Holding Company LLC Adaptive smart data cache eviction
CN110795372A (en) * 2018-08-03 2020-02-14 扬智科技股份有限公司 Data processing apparatus and direct memory access method
CN111209232A (en) * 2018-11-21 2020-05-29 北京百度网讯科技有限公司 Method, apparatus, device and storage medium for accessing static random access memory
CN114610951A (en) * 2020-12-08 2022-06-10 国信君和(北京)科技有限公司 Data processing method and device, electronic equipment and readable storage medium
CN114297129A (en) * 2021-12-08 2022-04-08 北方信息控制研究院集团有限公司 ZYNQ 7010-based message-oriented dual-core communication implementation method

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
CDS-RSRAM: a Reconfigurable SRAM Architecture to Reduce Read Power with Column Data Segmentation; Han Xu et al.; IEEE; 2020-12-31; full text *
GSVM: a vector memory supporting Gather/Scatter; Chen Haiyan, Liu Sheng, Wu Jianguo; Journal of National University of Defense Technology; 2020-06-28 (03); full text *
Research on solid-state storage technology for big data centers; Lang Weimin, An Haiyan, Yao Jinfang, Zhao Yifeng; 电信快报; 2018-02-10 (02); full text *
Xu Cheng et al.; 《嵌入式***导论》; China Railway Publishing House, 2011, p. 130. *
Shi Qin et al.; Digital Circuit Experiments (《数字电路实验》); Southeast University Press, 2021, pp. 98-99. *

Also Published As

Publication number Publication date
CN116069265A (en) 2023-05-05

Similar Documents

Publication Publication Date Title
US4115851A (en) Memory access control system
CN110059020B (en) Access method, equipment and system for extended memory
CN109977116B (en) FPGA-DDR-based hash connection operator acceleration method and system
US7694035B2 (en) DMA shared byte counters in a parallel computer
EP0090026A1 (en) Cache memory using a lowest priority replacement circuit.
CN113900974B (en) Storage device, data storage method and related equipment
CN115994122B (en) Method, system, equipment and storage medium for caching information
US8327122B2 (en) Method and system for providing context switch using multiple register file
CN116069265B (en) Storage and data processing method, device and storage medium
US20210157647A1 (en) Numa system and method of migrating pages in the system
CN110968538B (en) Data buffering method and device
AU2021339989B2 (en) Tri-color bitmap array for garbage collection
CN107102898B (en) Memory management and data structure construction method and device based on NUMA (non Uniform memory Access) architecture
US6697889B2 (en) First-in first-out data transfer control device having a plurality of banks
CN108846141B (en) Offline cache loading method and device
JPH02135562A (en) Queue buffer control system
CN116204124B (en) Data processing method and system based on conflict lock and electronic equipment
CN112988074B (en) Storage system management software adaptation method and device
CN110175053B (en) Picture loading method and device
WO2023137931A1 (en) Parsing method, parsing apparatus, electronic device, and computer storage medium
CN118170667A (en) Linux-based software performance test method, electronic equipment and storage medium
US20140095792A1 (en) Cache control device and pipeline control method
JP3917079B2 (en) How to determine the best access strategy
CN116662248A (en) Multi-CPU communication system and method, electronic device, and storage medium
JPS6235146B2 (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant