CN113360420B - Memory control method and device - Google Patents


Info

Publication number
CN113360420B
CN113360420B (application CN202010153078.2A)
Authority
CN
China
Prior art keywords
memory
group
fpga
slices
chips
Prior art date
Legal status
Active
Application number
CN202010153078.2A
Other languages
Chinese (zh)
Other versions
CN113360420A (en
Inventor
刘磊
Current Assignee
Beijing Machinery Equipment Research Institute
Original Assignee
Beijing Machinery Equipment Research Institute
Priority date
Filing date
Publication date
Application filed by Beijing Machinery Equipment Research Institute filed Critical Beijing Machinery Equipment Research Institute
Priority to CN202010153078.2A priority Critical patent/CN113360420B/en
Publication of CN113360420A publication Critical patent/CN113360420A/en
Application granted granted Critical
Publication of CN113360420B publication Critical patent/CN113360420B/en

Classifications

    • G — Physics; G06 — Computing, calculating or counting; G06F — Electric digital data processing
    • G06F 12/023 — Free address space management (user address space allocation, e.g. contiguous or non-contiguous base addressing)
    • G06F 3/061 — Improving I/O performance (interfaces specially adapted for storage systems)
    • G06F 3/0644 — Management of space entities, e.g. partitions, extents, pools
    • G06F 3/0647 — Migration mechanisms (horizontal data movement between storage devices or systems)
    • G06F 3/0671 — In-line storage system (interfaces adopting a particular infrastructure)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention relates to a memory control method and a memory control device, belongs to the field of communications technology, and solves prior-art problems such as the rise in interrupt frequency that occurs when the cache of an FPGA is small and the bidirectional data transmission rate is too high. The memory control method comprises the following steps: applying for a memory slice region of the upper computer with continuous physical addresses and dividing it into two equal memory spaces, wherein each memory space comprises a first group of memory slices and a second group of memory slices; opening up three memories to form three memory pools, wherein the first memory pool is used for managing the head addresses of all memory slices; opening up three caches in the FPGA for transferring memory slice addresses during data transmission; the FPGA fetching from the first memory pool the head addresses of the first or second group of memory slices in the first memory space and placing them into a first cache; and the FPGA performing a write operation on the first group of memory slices and/or a read operation on the second group of memory slices through DMA. The method reduces the interrupt frequency of the driver layer of the upper computer.

Description

Memory control method and device
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a memory control method and apparatus.
Background
The field programmable gate array (FPGA, Field Programmable Gate Array) is a product developed further on the basis of programmable devices. As a semi-custom circuit in the field of application-specific integrated circuits (ASICs), it both remedies the shortcomings of fully custom circuits and overcomes the limit on the number of gates of earlier programmable devices.
When an FPGA is used for high-speed data acquisition in the industrial field, it communicates frequently with the upper computer. The traditional method uses direct memory access (DMA, Direct Memory Access, which allows hardware devices of different speeds to communicate without imposing a heavy interrupt load on the CPU) to carry data between the FPGA and the upper computer. When the FPGA writes data, its cache raises an interrupt once a certain fill level is reached; the driver layer of the upper computer responds and then reads the data, and the upper computer needs to open up a large physical memory for caching and processing the data.
Because the cache of the FPGA is small, the interrupt frequency rises when the bidirectional data transmission rate is too high. When the upper computer is embedded hardware, the scheduling period of its operating system is generally on the millisecond level, so an excessively high interrupt frequency increases the response burden. Moreover, because memory capacity is limited, several large dedicated memories cannot be opened up for information interaction, so a single interface needs to be designed and driven separately, which increases the complexity and development period of the system.
Disclosure of Invention
In view of the above analysis, embodiments of the present invention aim to provide a memory control method and apparatus, so as to solve the problems that the cache of the FPGA is small, that the interrupt frequency increases when the bidirectional data transmission rate is too high, and that, when the upper computer is embedded hardware whose operating-system scheduling period is generally on the millisecond level, an excessively high interrupt frequency increases the response burden.
In one aspect, an embodiment of the present invention provides a memory control method, including: applying for a memory slice region of the upper computer with continuous physical addresses and dividing it into two equal memory spaces, wherein each memory space comprises a first group of memory slices and a second group of memory slices; opening up three memories to form three memory pools, wherein the first memory pool is used for managing the head addresses of all memory slices, the first group of memory slices and the second memory pool are used for storing data sent to the upper computer by the FPGA, and the second group of memory slices and the third memory pool are used for storing data sent to the FPGA by the upper computer; opening up three caches in the FPGA for transferring memory slice addresses during data transmission; the FPGA fetching from the first memory pool the head addresses of the first or second group of memory slices in the first memory space and storing them in a first cache; and the FPGA performing a write operation on the first group of memory slices and/or a read operation on the second group of memory slices through DMA.
The beneficial effects of the above technical solution are as follows: the memory control method of the embodiment of the invention applies for two memory spaces and opens up three memory pools to reduce the interrupt frequency of the driver layer of the upper computer during high-speed signal transmission, and realizes bidirectional communication with low resource consumption and simple control logic by opening up three FPGA caches used in cooperation with the memory slices of the upper computer, thereby reducing system complexity, enhancing the reliability of the transmission link, and shortening the development period of the system.
Based on a further improvement of the above method, the three caches further comprise a second cache and a third cache, wherein the first cache is used for storing the head addresses of memory slices taken from the first memory pool; the second cache is used for storing the head addresses of the first group of memory slices; and the third cache is used for storing the head addresses of the second group of memory slices.
Based on a further improvement of the above method, the first set of memory slices includes n memory slices, and the second set of memory slices includes m-n memory slices, where m is greater than n and m and n are both positive integers.
Based on a further improvement of the above method, the FPGA performing a write operation on the first set of memory slices by DMA includes: the FPGA takes the head addresses of the first group of memory slices out of the first memory pool and places them into the first cache; the FPGA writes fixed-length data into the first memory slice of the first group through DMA; when the first memory slice is full, its address is written into the second cache while data continues to be written into the second memory slice of the first group; and writing continues through the remaining memory slices of the first group until the nth memory slice is full, whereupon the head address of the nth memory slice is placed into the second cache.
Based on a further improvement of the above method, the FPGA performing a read operation on the second set of memory slices by DMA includes: the FPGA starts to read the data in the (n+1)th memory slice of the second group through DMA; after the (n+1)th memory slice is read empty, its head address is placed into the third cache while the data in the (n+2)th memory slice is read; and reading continues through the remaining memory slices of the second group until the mth memory slice is read empty, whereupon the head address of the mth memory slice is placed into the third cache.
Based on a further improvement of the above method, the FPGA pushes the data in the third cache into the second cache.
Based on a further improvement of the above method, writing the first set of memory slices and/or reading the second set of memory slices further comprises: after the second set of memory slices is read, the first set of memory slices is written.
Based on a further improvement of the above method, the memory control method further includes: the FPGA triggers the upper computer to interrupt by controlling an external interrupt pin, and notifies the upper computer that the data transmission is finished; the upper computer responds to the interrupt and takes out the addresses of all the memory chips in the second buffer; and copying the data in the first group of memory chips into the second memory pool according to the addresses of all the memory chips in the second buffer, and/or writing the data which needs to be written into the FPGA into the second group of memory chips and putting the data into the third memory pool.
Based on a further improvement of the above method, the memory control method further includes: the FPGA is used for storing the first addresses of the first group of memory chips or the second group of memory chips in the second memory space of the first memory Chi Quchu into a first cache; the FPGA performs writing operation on the first group of memory chips in the second memory space and/or performs reading operation on the second group of memory chips in the second memory space through DMA; and after the writing operation and/or the reading operation of the second memory space are completed, the upper computer releases addresses of all the memory chips.
The beneficial effects of the further improved scheme are as follows: the memory control method of the embodiment of the invention simplifies the driver complexity of high-speed communication between an embedded operating system and the FPGA. By designing caches at the FPGA end, data movement is performed directly on memory addresses, which increases the flexibility of upper-computer memory operation, enhances system stability, shortens the development period, and suits the various high-speed interfaces at the FPGA end.
On the other hand, an embodiment of the present invention provides a memory control apparatus comprising an upper computer and an FPGA. The upper computer comprises: a memory slice region with continuous physical addresses, divided into two equal memory spaces, each memory space comprising a first group of memory slices and a second group of memory slices; and three memory pools, wherein the first memory pool is used for managing the head addresses of all memory slices, the first group of memory slices and the second memory pool are used for storing data sent to the upper computer by the FPGA, and the second group of memory slices and the third memory pool are used for storing data sent to the FPGA by the upper computer. The FPGA comprises: three caches used for transferring memory slice addresses during data transmission; an acquisition module that fetches from the first memory pool the head addresses of the first or second group of memory slices in the first memory space and places them into the first cache; and a read-write module that performs a write operation on the first group of memory slices and/or a read operation on the second group of memory slices through DMA.
Compared with the prior art, the invention has at least one of the following beneficial effects:
1. Applying for two memory spaces and opening up three memory pools reduces the interrupt frequency of the driver layer of the upper computer, and opening up three small FPGA caches used in cooperation with the memory slices of the upper computer realizes bidirectional communication with low resource consumption and simple control logic;
2. The driver complexity of high-speed communication between an embedded operating system and the FPGA is simplified; and
3. By designing caches at the FPGA end, data movement is performed directly on memory addresses, which increases the flexibility of upper-computer memory operation, enhances system stability, shortens the development period, and suits the various high-speed interfaces at the FPGA end.
In the invention, the technical schemes can be mutually combined to realize more preferable combination schemes. Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention may be realized and attained by the structure particularly pointed out in the written description and drawings.
Drawings
The drawings are only for purposes of illustrating particular embodiments and are not to be construed as limiting the invention, like reference numerals being used to refer to like parts throughout the several views.
FIG. 1 is a flow chart of a memory control method according to an embodiment of the invention;
FIG. 2 is a flow chart of a write operation in a memory control method according to an embodiment of the invention;
FIG. 3 is a flow chart of a read operation in a memory control method according to an embodiment of the invention;
FIG. 4 is a block diagram of a memory control device according to an embodiment of the invention; and
Fig. 5 is a diagram illustrating a memory control method according to an embodiment of the present invention.
Reference numerals:
400-upper computer; 402-a first memory space; 404-a second memory space; 406-a first set of memory slices; 408-a second set of memory slices; 410-a first memory pool; 412-a second memory pool; 414-a third memory pool; 416-FPGA; 418-three blocks of caches; 420-an acquisition module; 422-a read-write module;
Detailed Description
The following detailed description of preferred embodiments of the application is made in connection with the accompanying drawings, which form a part hereof, and together with the description of the embodiments of the application, are used to explain the principles of the application and are not intended to limit the scope of the application.
In one embodiment of the present invention, a memory control method is disclosed. As shown in fig. 1, the memory control method includes: step S102, applying for a memory slice region of an upper computer with continuous physical addresses and dividing it into two equal memory spaces, wherein each memory space comprises a first group of memory slices and a second group of memory slices; step S104, opening up three memories to form three memory pools, wherein the first memory pool is used for managing the head addresses of all memory slices, the first group of memory slices and the second memory pool are used for storing data sent to the upper computer by the FPGA, and the second group of memory slices and the third memory pool are used for storing data sent to the FPGA by the upper computer; step S106, opening up three caches in the FPGA for transferring memory slice addresses during data transmission; step S108, the FPGA fetching from the first memory pool the head addresses of the first or second group of memory slices in the first memory space and storing them in a first cache; and step S110, the FPGA performing a write operation on the first group of memory slices and/or a read operation on the second group of memory slices through DMA.
Compared with the prior art, the memory control method provided by the embodiment applies for two memory spaces and opens up three memory pools so as to reduce the interrupt frequency of an upper computer driving layer during high-speed signal transmission, and the two-way communication is realized by opening up three smaller FPGA caches and matching with the use of memory chips of an upper computer with smaller resource consumption and control logic, so that the complexity of a system is reduced, the reliability of a transmission link is enhanced, and the development period of the system is shortened.
Hereinafter, a memory control method according to an embodiment of the present invention will be described in detail with reference to fig. 1 to 3, and fig. 5.
The memory control method of the embodiment of the invention comprises: step S102, applying for a memory slice region of the upper computer with continuous physical addresses and dividing it into two equal memory spaces; and step S104, opening up three memories to form three memory pools, wherein the first memory pool is used for managing the head addresses of all memory slices, the first group of memory slices and the second memory pool are used for storing data sent to the upper computer by the FPGA, and the second group of memory slices and the third memory pool are used for storing data sent to the FPGA by the upper computer. Specifically, the first group of memory slices includes n memory slices, and the second group of memory slices includes m-n memory slices, where m is greater than n and both m and n are positive integers.
Hereinafter, memory division of the upper computer will be described in detail by way of specific examples with reference to fig. 1 and 5.
The first step: the upper computer applies for a memory space with continuous physical addresses through a driver-layer development tool (for example, WinDriver) and divides the whole memory space equally into 2m regions, each region being called a memory slice; the first m memory slices form the first memory space, and the last m memory slices form the second memory space. The first n (n < m) memory slices of each space (i.e., the first group of memory slices) are used to store data sent by the FPGA to the upper computer, and the last m-n memory slices (i.e., the second group of memory slices) are used to store data sent by the upper computer to the FPGA. Meanwhile, the upper computer opens up three memories (see fig. 5) through an application program (such as a VS or QT application); each memory region is called a memory pool. Memory pool 1 (the first memory pool) stores the head addresses of all memory slices, memory pool 2 (the second memory pool) stores data sent by the FPGA to the upper computer, and memory pool 3 (the third memory pool) stores data sent by the upper computer to the FPGA.
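The partitioning in the first step can be sketched as follows. This is an illustrative Python model, not code from the patent; the names (`partition`, `SLICE_SIZE`, the base address, and the concrete m and n) are assumptions chosen for the example.

```python
SLICE_SIZE = 4096  # assumed fixed slice length
m, n = 8, 5        # m slices per memory space; first n carry FPGA-to-host data

def partition(base_addr, m, slice_size=SLICE_SIZE):
    """Return head addresses of the 2m equal slices of a contiguous region."""
    return [base_addr + i * slice_size for i in range(2 * m)]

slices = partition(0x1000_0000, m)          # hypothetical physical base address
space1, space2 = slices[:m], slices[m:]     # two equal memory spaces
group1, group2 = space1[:n], space1[n:]     # first/second group within space 1

pool1 = list(slices)   # memory pool 1: head addresses of all memory slices
pool2 = []             # memory pool 2: data the FPGA sent to the upper computer
pool3 = []             # memory pool 3: data the upper computer sends to the FPGA
```

Because the region is physically contiguous, a slice's head address is simply the base address plus its index times the slice length, which is what lets the FPGA address slices by head address alone.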
The memory control method of the embodiment of the invention further comprises the following steps: step S106, a three-block buffer memory is opened up in the FPGA for transmitting the memory chip address during data transmission. Specifically, the three-block cache further comprises a second cache and a third cache, wherein the first cache is used for storing the first address of the memory chip in the first memory pool; the second cache is used for storing the first address of the first group of memory slices; and the third buffer is used for storing the head address of the second group of memory chips.
Hereinafter, the opening up of the three caches within the FPGA is described in detail by way of a specific example with reference to fig. 1 and 5.
And a second step of: three caches (FIFO 1, FIFO2 and FIFO 3) are opened up in the FPGA and used for transferring memory chip addresses during data transmission, wherein the depths of the FIFO1, the FIFO2 and the FIFO3 (see FIG. 5) are 2m, and the number of the memory chips corresponds to 2m in the physical memory space of the upper computer. The FIFO1 (i.e., the first cache) is configured to obtain a first address of a memory slice stored in the memory pool 1; FIFO2 (i.e., the second buffer) is used to store the head address of the memory slice that the FPGA sends to the memory data. FIFO3 (i.e., the third buffer) stores the first address of the memory slice that is stored with the data sent to the FPGA.
The memory control method of the embodiment of the invention further comprises the following steps: in step S108, the FPGA locates and stores the first addresses of the first group of memory slices or the second group of memory slices in the first memory space of the first memory Chi Quchu in the first buffer. Specifically, when performing a write operation, the FPGA locates and stores the first addresses of the first group of memory slices in the first memory space of the first memory Chi Quchu in the first cache; optionally, when performing the read operation, the FPGA locates and stores the first addresses of the second set of memory slices in the first memory space of the first memory Chi Quchu in the first cache.
And a third step of: the upper computer transmits the first m data in the memory pool 1 to the FPGA in a mode of using a read-write register by driving, and the FPGA places the data into the FIFO 1. Optionally, when performing a read operation, the upper computer transfers m-n data after the data in the memory pool 2 to the FPGA by driving a read-write register, and the FPGA places the data into the FIFO 1.
The memory control method of the embodiment of the invention further comprises the following steps: in step S110, the FPGA performs a write operation on the first set of memory slices and/or a read operation on the second set of memory slices through DMA.
Specifically, referring to fig. 2, the FPGA performing a write operation on the first set of memory slices by DMA includes: step S202, the FPGA takes the head addresses of the first group of memory slices out of the first memory pool and places them into the first cache; step S204, the FPGA writes fixed-length data into the first memory slice of the first group through DMA; step S206, when the first memory slice is full, its address is written into the second cache while data continues to be written into the second memory slice of the first group; and step S208, writing continues through the remaining memory slices of the first group until the nth memory slice is full, whereupon the head address of the nth memory slice is placed into the second cache.
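The write path of steps S202 to S208 can be sketched as follows. This is an illustrative model in which the FIFOs are plain lists and the DMA transfer is reduced to a dictionary update; the function name and parameters are assumptions:

```python
def fpga_write(fifo1_addrs, fifo2, slice_store, payload):
    """Write fixed-length data into each first-group slice whose head address
    was staged in FIFO1; once a slice is full, push its address into FIFO2."""
    for addr in fifo1_addrs:
        slice_store[addr] = payload   # DMA writes one slice worth of data
        fifo2.append(addr)            # slice full -> its head address to FIFO2

fifo1 = [0x1000, 0x2000, 0x3000]      # head addresses of n = 3 first-group slices
fifo2, store = [], {}
fpga_write(fifo1, fifo2, store, b"fixed-length data")
```

The key property the model preserves is that FIFO2 ends up holding, in order, the head address of every slice the FPGA has filled, which is exactly what the upper computer later reads back on interrupt.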
Specifically, referring to fig. 3, the FPGA performing a read operation on the second set of memory slices through DMA includes: step S302, the FPGA starts to read the data in the (n+1)th memory slice of the second group through DMA; step S304, after the (n+1)th memory slice is read empty, its head address is placed into the third cache while the data in the (n+2)th memory slice is read; and step S306, reading continues through the remaining memory slices of the second group until the mth memory slice is read empty, whereupon the head address of the mth memory slice is placed into the third cache. The FPGA then pushes the data in the third cache into the second cache.
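The read path of steps S302 to S306, including the final push of FIFO3 into FIFO2, can be sketched the same way. Again a hedged illustrative model (plain lists as FIFOs, a dict as the slice store; names are assumptions):

```python
def fpga_read(group2_addrs, slice_store, fifo3):
    """Read each second-group slice empty via DMA; record the head address of
    every drained slice in FIFO3."""
    out = []
    for addr in group2_addrs:
        out.append(slice_store.pop(addr))  # DMA reads the slice empty
        fifo3.append(addr)                 # empty slice -> address to FIFO3
    return out

store = {0x4000: b"a", 0x5000: b"b"}       # m - n = 2 second-group slices
fifo2, fifo3 = [0x1000], []                # FIFO2 already holds write addresses
data = fpga_read([0x4000, 0x5000], store, fifo3)
fifo2.extend(fifo3)                        # FPGA pushes FIFO3 into FIFO2
```

After the merge, FIFO2 contains the first-group addresses followed by the second-group addresses, so a single interrupt hands the upper computer everything it needs for both directions.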
In an alternative embodiment, the writing operation to the first set of memory slices and/or the reading operation to the second set of memory slices further comprises: after performing the read operation on the second set of memory slices, performing a write operation (i.e., reversing the order of the write operation and the read operation) on the first set of memory slices; or only write operation is performed on the first group of memory slices and read operation is performed on the second group of memory slices; or only the second set of memory slices is read and not the first set of memory slices.
The memory control method of the embodiment of the invention further comprises the following steps: the FPGA triggers the upper computer to interrupt by controlling an external interrupt pin, and notifies the upper computer that the data transmission is finished at this time; the upper computer responds to the interrupt and takes out the addresses of all the memory chips in the second buffer; and copying the data in the first group of memory slices into the second memory pool according to the addresses of all the memory slices in the second buffer, and/or writing the data which need to be written into the FPGA into the second group of memory slices and putting the data into the third memory pool.
In addition, the memory control method of the embodiment of the invention further comprises: the FPGA fetches from the first memory pool the head addresses of the first or second group of memory slices in the second memory space and places them into the first cache; the FPGA performs a write operation on the first group of memory slices in the second memory space and/or a read operation on the second group of memory slices in the second memory space through DMA; and, after the write and/or read operations on the second memory space are completed, the upper computer releases the addresses of all memory slices. In other words, after the read and write operations on the first memory space are completed, the read and write operations continue on the second memory space; or a write operation on the first memory space is followed by a read operation on the second memory space; or a read operation on the first memory space is followed by a write operation on the second memory space.
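The ping-pong over the two memory spaces described above can be sketched in a few lines: process space 1, then space 2, then return every slice address to pool 1 so the cycle can repeat. Purely illustrative; `cycle` and `process` are assumed names:

```python
def cycle(spaces, pool1, process):
    """One full transfer cycle: process each memory space in turn, then the
    upper computer releases all slice addresses back into memory pool 1."""
    for space in spaces:       # first memory space, then the second
        process(space)         # write first group and/or read second group
    pool1.clear()
    for space in spaces:       # release: all head addresses return to pool 1
        pool1.extend(space)

processed = []
space1, space2 = [0x1000, 0x2000], [0x3000, 0x4000]
pool1 = []
cycle([space1, space2], pool1, processed.append)
```

While the FPGA works on one memory space, the slices of the other are free for the application layer, which is what makes continuous bidirectional streaming possible without extra interrupts.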
Hereinafter, referring to fig. 5, step S110 of the memory control method, i.e., writing operations to the first set of memory slices and/or reading operations to the second set of memory slices, will be described in detail by way of specific example.
Fourth step: the FPGA starts to write fixed-length data into the address of the first memory slice through a DMA controller and a high-speed interface (such as PCIe);
Fifth step: after a memory slice is full, write its head address into FIFO2, and at the same time write fixed-length data into the next memory slice address taken from FIFO1;
Sixth step: repeating the fifth step to write data until the nth memory chip is fully written, and putting the first address of the nth memory chip into the FIFO 2;
Seventh step: the FPGA starts to read the data in the (n+1) th memory chip address through the DMA controller;
Eighth step: after a memory slice is read empty, write its head address into FIFO3, and at the same time read the data in the next memory slice taken from FIFO1;
Ninth step: repeating the eighth step to read the data until the mth memory chip is read, and putting the first address of the mth memory chip into the FIFO 3;
tenth step: the FPGA presses the data of the FIFO3 into the FIFO 2;
Eleventh step: the FPGA triggers the upper computer to interrupt by controlling an external interrupt pin, and notifies the upper computer that the data transmission is finished at this time. The upper computer responds to the interrupt and then takes out the addresses of all memory slices in the FIFO2 at the moment, copies the data in the first n memory slices into the memory pool 2 according to the memory slice addresses in the FIFO2, simultaneously writes the data which need to be written into the FPGA into the last m-n memory slices, and puts the data into the memory pool 3 for management;
twelfth step: the FPGA takes out the first address of the second group of memory chips of the memory pool 1, puts the first address into the FIFO1, and repeats the steps four to eleven;
Thirteenth step: the upper computer releases all memory chip addresses, and places the addresses of all memory chips into the memory pool 1 so as to be capable of continuously reading and/or writing the first memory space and the second memory space; and
Fourteenth step: repeating the third to thirteenth steps.
Optionally, when only the write operation is performed on the first group of memory slices and no read operation is performed on the second group, the seventh to tenth steps are omitted; when only the read operation is performed on the second group and no write operation is performed on the first group, the fourth to sixth steps are omitted; and when the order of the write and read operations is interchanged, that is, the read operation precedes the write operation, the seventh to tenth steps are performed before the fourth to sixth steps.
In one embodiment of the present invention, a memory control device is disclosed. As shown in fig. 4, the memory control device includes an upper computer 400 and an FPGA 416. The upper computer 400 includes: two memory spaces, i.e., a memory slice region with consecutive physical addresses divided into two equal memory spaces 402 and 404, each including a first group of memory slices 406 and a second group of memory slices 408; and three memory pools, i.e., a first memory pool 410 for managing the first addresses of all memory slices, a second memory pool 412 which, together with the first group of memory slices 406, stores data sent by the FPGA to the upper computer, and a third memory pool 414 which, together with the second group of memory slices 408, stores data sent to the FPGA. The FPGA 416 includes: three caches 418 for passing memory slice addresses during data transfer; an obtaining module 420, which takes the first address of the first group of memory slices or the second group of memory slices in the first memory space out of the first memory pool and puts it into the first cache; and a read/write module 422, which performs the write operation on the first group of memory slices and/or the read operation on the second group of memory slices through DMA.
In addition, the memory control device further includes other modules; since the memory control device corresponds to the memory control method described above, those modules are not described again here to avoid redundancy.
In the embodiment of the invention, assume that the size of a single memory slice is 512KB, m is 128, and n is 64 (the upper computer and the FPGA exchange equal amounts of data in each direction), the depth of each of the three FIFOs is 128, and, with a bus clock frequency of 62.5MHz and a 32-bit data bus, the data rate is 250MB/s (125MB/s each for upload and download). Therefore, in the embodiment of the present invention, the interrupt period when the upper computer processes the data is: 512KB × 64 / 250MB/s = 131.07ms. In contrast, in the prior art, assuming the FPGA uses DMA transfer directly with a single 512KB memory for data interaction, the interrupt period is: 512KB / 250MB/s = 2.05ms. The new processing method therefore reduces the interrupt frequency by a factor of 64 and reduces the latency of bidirectional data interaction.
The memory control method of the embodiment of the invention simplifies driver complexity in high-speed communication between an embedded operating system and an FPGA. By designing caches at the FPGA end, data is moved directly at the memory addresses, which increases the flexibility of the upper computer's memory operations, enhances system stability, shortens the development cycle, and suits the various high-speed interfaces at the FPGA end.
Compared with the prior art, the embodiment of the invention can achieve at least one of the following beneficial effects:
1. Applying for two memory spaces and opening up three memory pools reduces the interrupt frequency at the upper computer's driver layer, and opening up three small caches in the FPGA, used together with the upper computer's memory slices, achieves bidirectional communication with low resource consumption and simple control logic;
2. The driver complexity of high-speed communication between the embedded operating system and the FPGA is reduced; and
3. By designing caches at the FPGA end, data is moved directly at the memory addresses, which increases the flexibility of the upper computer's memory operations, enhances system stability, shortens the development cycle, and suits the various high-speed interfaces at the FPGA end.
Those skilled in the art will appreciate that all or part of the flow of the methods of the above embodiments may be accomplished by a computer program instructing associated hardware, where the program may be stored on a computer-readable storage medium such as a magnetic disk, an optical disk, a read-only memory, or a random access memory.
The present invention is not limited to the above-mentioned embodiments, and any changes or substitutions that can be easily understood by those skilled in the art within the technical scope of the present invention are intended to be included in the scope of the present invention.

Claims (10)

1. A memory control method, comprising:
applying, in an upper computer, for a memory chip area with continuous physical addresses, and dividing the memory chip area into two equal memory spaces, wherein each memory space comprises a first group of memory chips and a second group of memory chips;
opening up three memories in the upper computer to form three memory pools located outside the two memory spaces, wherein a first memory pool is used for managing the first addresses of all memory chips, the first group of memory chips and a second memory pool are used for storing data sent by an FPGA to the upper computer, and the second group of memory chips and a third memory pool are used for storing data sent by the upper computer to the FPGA;
opening up three blocks of caches in the FPGA for transmitting memory chip addresses during data transmission;
When the FPGA performs write operation on the first group of memory chips through DMA, the FPGA takes out the first address of the first group of memory chips from the first memory pool and puts the first address into a first cache; and
And when the FPGA performs reading operation on the second group of memory chips through DMA, the FPGA takes out the first address of the second group of memory chips from the first memory pool and puts the first address into a first cache.
2. The memory control method of claim 1, wherein the three-block cache further comprises a second cache and a third cache, wherein,
The first cache is used for storing the first address of the memory chip in the first memory pool;
the second cache is used for storing the first address of the first group of memory chips; and
The third buffer is configured to store a first address of the second set of memory slices.
3. The memory control method of claim 2, wherein the first set of memory slices comprises n memory slices, and the second set of memory slices comprises m-n memory slices, wherein m is greater than n and m, n are both positive integers.
4. The memory control method of claim 3, wherein the FPGA writing to the first set of memory slices by DMA comprises:
The FPGA takes out the first address of the first group of memory chips from the first memory pool and puts the first address into the first cache;
the FPGA writes fixed-length data into a first memory chip in the first group of memory chips through the DMA;
When the first memory chip is fully written, writing the address of the first memory chip into a second cache, and simultaneously continuing writing data into a second memory chip in the first group of memory chips; and
And continuing to write data into the rest memory slices in the first group of memory slices until the nth memory slice is fully written, and placing the head address of the nth memory slice into a second cache.
5. The memory control method of claim 3, wherein the FPGA performing a read operation on the second set of memory slices by DMA comprises:
The FPGA starts to read the data in the (n+1) th memory chip in the second group of memory chips through the DMA;
after the n+1th memory chip is empty, the first address of the n+1th memory chip is put into a third buffer memory, and meanwhile, the data in the n+2th memory chip is read; and
And continuing to read the data in the rest memory slices in the second group of memory slices until the mth memory slice is read, and placing the head address of the mth memory slice into a third cache.
6. The memory control method of claim 5, wherein the FPGA pushes the data in the third cache into the second cache.
7. The memory control method of claim 6, wherein writing the first set of memory slices and/or reading the second set of memory slices further comprises:
after the second set of memory slices is read, the first set of memory slices is written.
8. The memory control method according to claim 7, further comprising:
the FPGA triggers the upper computer to interrupt by controlling an external interrupt pin, and notifies the upper computer that the data transmission is finished;
the upper computer responds to the interrupt and takes out the addresses of all the memory chips in the second cache; and
copying the data in the first group of memory slices into the second memory pool according to the addresses of all the memory slices in the second cache, and/or writing the data that needs to be written to the FPGA into the second group of memory slices and placing the data into the third memory pool.
9. The memory control method according to claim 8, further comprising:
when the FPGA performs a write operation on the first group of memory chips in the second memory space through DMA, the FPGA takes the first address of the first group of memory chips in the second memory space out of the first memory pool and puts the first address into the first cache;
when the FPGA performs a read operation on the second group of memory chips in the second memory space through DMA, the FPGA takes the first address of the second group of memory chips in the second memory space out of the first memory pool and puts the first address into the first cache; and
After the writing operation and/or the reading operation of the second memory space are completed, the upper computer releases addresses of all the memory chips.
10. A memory control device is characterized by comprising an upper computer and an FPGA,
The upper computer includes:
The memory chip area with continuous physical addresses is divided into two equal memory spaces, and each memory space comprises a first group of memory chips and a second group of memory chips;
The first memory pool is used for managing the first addresses of all memory slices, the first group of memory slices and the second memory pool are used for storing data sent to the upper computer by the FPGA, the second group of memory slices and the third memory pool are used for storing data sent to the FPGA, and the three memory pools are located outside the two memory spaces;
the FPGA comprises:
the three-block cache is used to pass the memory chip address during data transfer,
The acquisition and read-write module is used for taking out the first address of the first group of memory chips from the first memory pool and putting the first address into a first cache when the FPGA performs write operation on the first group of memory chips through DMA; and when the FPGA performs reading operation on the second group of memory chips through DMA, the FPGA takes out the first address of the second group of memory chips from the first memory pool and puts the first address into a first cache.
CN202010153078.2A 2020-03-06 2020-03-06 Memory control method and device Active CN113360420B (en)


Publications (2)

Publication Number Publication Date
CN113360420A CN113360420A (en) 2021-09-07
CN113360420B true CN113360420B (en) 2024-05-17





Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant