WO2017114091A1 - NAS data access method, system, and related devices

NAS data access method, system, and related devices

Info

Publication number
WO2017114091A1
Authority
WO
WIPO (PCT)
Prior art keywords
nas
target data
data
acceleration device
dmafs
Prior art date
Application number
PCT/CN2016/108238
Other languages
English (en)
French (fr)
Inventor
郭海涛
刘洪广
方钧炜
贺荣徽
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to CN201680001864.2A priority Critical patent/CN108028833B/zh
Priority to EP16880881.4A priority patent/EP3288232B1/en
Publication of WO2017114091A1 publication Critical patent/WO2017114091A1/zh
Priority to US16/020,754 priority patent/US11275530B2/en

Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06F ELECTRIC DIGITAL DATA PROCESSING
          • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
            • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
              • G06F 3/0601 Interfaces specially adapted for storage systems
                • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
                  • G06F 3/0629 Configuration or reconfiguration of storage systems
                  • G06F 3/0638 Organizing or formatting or addressing of data
                    • G06F 3/0643 Management of files
                  • G06F 3/0655 Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
                    • G06F 3/0659 Command handling arrangements, e.g. command buffers, queues, command scheduling
                  • G06F 3/0662 Virtualisation aspects
                • G06F 3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
                  • G06F 3/067 Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
                  • G06F 3/0671 In-line storage system
                    • G06F 3/0673 Single storage device
    • H ELECTRICITY
      • H04 ELECTRIC COMMUNICATION TECHNIQUE
        • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
          • H04L 67/00 Network arrangements or protocols for supporting network services or applications
            • H04L 67/01 Protocols
              • H04L 67/10 Protocols in which an application is distributed across nodes in the network
                • H04L 67/1097 Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
            • H04L 67/50 Network services
              • H04L 67/56 Provisioning of proxy services
                • H04L 67/565 Conversion or adaptation of application format or content

Definitions

  • The present invention relates to the field of storage, and in particular to a NAS data access method, system, acceleration device, and NAS client.
  • Network Attached Storage (NAS) is a client/server (C/S) shared storage system in which the NAS server provides a network-based file sharing service for NAS clients without the intervention of an application server (AS). It allows users to access data over the network and provides cross-platform file sharing, making it convenient for different hosts and application servers to access the same data, so the technology is increasingly widely used in enterprise data centers.
  • The NAS client and the NAS server are connected through network card interfaces and communicate based on the Common Internet File System (CIFS) protocol or the Network File System (NFS) protocol. CIFS is application-layer logic defined by Microsoft and is mainly used between NAS clients and NAS servers running the Windows operating system; NFS is application-layer logic defined for Linux and Unix and is mainly used by NAS clients running the Linux operating system.
  • In the prior art, the NAS protocol stack works as follows: when application software on the NAS client needs to access NAS data on the NAS server, the NAS client first sends an access request message to the virtual file system (VFS); the VFS forwards the request message to NFS; NFS encodes the request through External Data Representation (XDR) and passes it to the Remote Procedure Call (RPC) module; the RPC module then selects a network protocol such as TCP/IP, UDP/IP, or RDMA (the latter two protocols are not shown in the figure).
  • When RPC selects the TCP/IP protocol, the request is further processed by Open Network Computing (ONC) RPC over the Transmission Control Protocol (TCP), and is finally sent to the NAS server through the underlying hardware device (such as a network card) and its driver (such as a network card driver).
  • The NAS server receives the NAS client's request through a similar, reverse process and replies the corresponding information to the NAS client.
  • This heavy protocol processing causes high CPU load, large memory usage, and unsatisfactory latency on the NAS client, which affects the overall performance and data access efficiency of the NAS client.
  • The present invention provides a NAS data access method, system, and related devices, which solve the problems of high CPU load, large memory occupation, and long processing time on the NAS client during NAS data access in the prior art, and improve the overall performance and data access efficiency of the NAS client.
  • According to a first aspect, a NAS data access method is provided. The method is applied to a NAS data access system that includes a NAS client and an acceleration device, where the acceleration device includes a first interface and a second interface, is connected to the NAS client through the first interface, and is connected to the NAS server through the second interface.
  • The NAS client receives an access request message and determines an operation object according to the information of the target data to be accessed carried in the access request message, that is, determines the directory and/or file to which the target data to be accessed belongs. The NAS client then generates a first direct memory access file system (DMAFS) packet according to a preset file system type, where the preset file system type describes the format of the DMAFS packet, and sends the first DMAFS packet to the acceleration device, which completes the remaining protocol processing of the NAS protocol stack.
  • The first DMAFS packet includes the operation object and the operation type carried in the access request message. A DMAFS packet includes a request number and DMAFS data, where the DMAFS data includes the operation object, the parameters of the user request, the execution status of the user request, and data, so that the acceleration device can convert the operation object and the operation type in the first DMAFS packet into network file system (NFS) data and encapsulate the NFS data into a network transmission protocol packet to be sent to the NAS server. In this way the protocol processing of the NAS client is offloaded, the CPU and memory load of the NAS client and the processing delay of the access request are reduced, and the processing efficiency of the entire NAS data access system is improved.
  • In a possible implementation, the first interface is a Peripheral Component Interconnect Express (PCIe) interface or another high-speed peripheral interface, and the second interface is a network card interface. The high-speed peripheral interface may be a Thunderbolt interface.
  • In a possible implementation, the NAS client receives a second DMAFS packet sent by the acceleration device that carries the operation result for the first DMAFS packet, where the operation result includes the target data to be accessed in the first DMAFS packet and the directory and/or file to which the target data belongs.
  • In a possible implementation, before the NAS client receives the access request message, an initialization process is performed: the NAS client first sends a third DMAFS packet to the acceleration device to request the directory in which the NAS data is stored; it then receives the mount directory information sent by the acceleration device and mounts the directory storing the NAS data, carried in the mount directory information, to a local directory.
  • In a possible implementation, the NAS client updates its local directory according to the directory and/or file information to which the target data in the operation result belongs.
  • In summary, after receiving the user's access request message, the NAS client converts the access request message into a DMAFS packet in the preset file system format and sends it to the acceleration device, which completes the protocol-stack processing that the NAS client performs in the prior art. This reduces the CPU and memory load of the NAS client and improves the processing efficiency of the entire NAS data access system.
  • According to a second aspect, a NAS data access method is provided, applied to a NAS data access system that includes a NAS client and an acceleration device, where the acceleration device includes a first interface and a second interface, is connected to the NAS client through the first interface, and is connected to the NAS server through the second interface.
  • The acceleration device receives a first DMAFS packet from the NAS client and obtains the operation object and the operation type of the target data to be accessed carried in the packet; it then converts the operation object and the operation type into network file system (NFS) data, encapsulates the NFS data into a network transmission protocol packet, and sends the packet to the NAS server, thereby completing the NAS data access.
  • The network transmission protocol may be the Transmission Control Protocol/Internet Protocol (TCP/IP), the User Datagram Protocol/Internet Protocol (UDP/IP), or Remote Direct Memory Access (RDMA).
  • In this way, the NAS client and the acceleration device exchange data using DMAFS packets; after receiving a DMAFS packet, the acceleration device completes the remaining NAS protocol stack processing and finally sends the operation object and operation type of the target data to the NAS server in the form of a network transmission protocol packet. This reduces the CPU and memory load of the NAS client, reduces the processing delay, and improves the processing efficiency of the overall NAS data access system.
  • In a possible implementation, the first interface is a PCIe interface or another high-speed peripheral interface, the second interface is a network card interface, and the high-speed peripheral interface may be a Thunderbolt interface.
  • In a possible implementation, when the network protocol is TCP/IP, the acceleration device first encapsulates the NFS data into a first External Data Representation (XDR) packet, then encapsulates the first XDR packet into a first Remote Procedure Call (RPC) packet, and finally encapsulates the first RPC packet into a first TCP/IP packet and sends it to the NAS server, thereby completing data transmission between the acceleration device and the NAS server when the network protocol is TCP/IP.
  • In a possible implementation, when the network protocol is UDP/IP, the acceleration device first encapsulates the NFS data into a first XDR packet, then encapsulates the first XDR packet into a first RPC packet, and finally encapsulates the first RPC packet into a first UDP/IP packet and sends it to the NAS server, thereby completing data transmission between the acceleration device and the NAS server when the network protocol is UDP/IP.
  • In a possible implementation, when the network protocol is RDMA, the acceleration device first encapsulates the NFS data into a first XDR packet, then encapsulates the first XDR packet into a first RPC packet, and finally encapsulates the first RPC packet into a first RDMA packet and sends it to the NAS server, thereby completing data transmission between the acceleration device and the NAS server when the network protocol is RDMA.
  • In a possible implementation, before the NAS data is accessed, the acceleration device completes an initialization process: it sends a first request message to the NAS server, where the first request message is used to request the directory in which the NAS data is stored, receives the mount directory information sent by the NAS server, and mounts the directory storing the NAS data to a local directory according to the mount directory information.
  • The acceleration device also receives a third DMAFS packet sent by the NAS client, which the NAS client uses to request the directory in which the NAS data is stored, and sends the mount directory information to the NAS client so that the NAS client can mount the directory storing the NAS data to its own local directory according to the mount directory information.
  • After receiving the network packet carrying the operation object and the operation type, the NAS server performs a read request operation or a write request operation on the target data and sends the operation result to the acceleration device, where the operation result includes the target data and the directory and/or file to which the target data belongs.
  • In a possible implementation, the acceleration device receives a network protocol packet sent by the NAS server that carries the operation result for the target data, generates a second DMAFS packet according to the preset file system type, where the preset file system type describes the format of the DMAFS packet and the second DMAFS packet includes the operation result, and sends the second DMAFS packet to the NAS client.
  • In a possible implementation, the acceleration device further includes a data buffer, which serves as the NFS cache and stores historical data of processed access request messages. For example, when a user performs a read request operation or a write request operation, the data involved in that operation may be stored in the data buffer.
  • In a possible implementation, when the target data exists in the data buffer, the acceleration device performs the operation on the target data according to the operation object and the operation type, and sends the operation result for the target data to the NAS client.
  • The capacity of the data stored in the data buffer of the acceleration device may be controlled by a preset threshold; when the threshold is exceeded, the acceleration device may delete the earliest stored historical data of a specified capacity according to a preset configuration.
  • In a possible implementation, when the target data exists in the data buffer and the operation type is a read request, the acceleration device obtains the target data and the directory and/or file to which the target data belongs from the data buffer and sends them to the NAS client, thereby improving the processing efficiency of read requests and reducing their processing delay.
  • In a possible implementation, when the target data exists in the data buffer of the acceleration device and the operation type is a write request, the acceleration device first obtains the target data and performs the write request operation on it, then sends the operation object and the operation type to the NAS server and receives the response information of the NAS server for the write request on the target data, where the response information indicates whether the write operation on the target data succeeded.
  • In a possible implementation, when the target data does not exist in the data buffer, the acceleration device first sends the operation object and the operation type to the NAS server and then receives the operation result for the target data sent by the NAS server.
  • Specifically, when the target data does not exist in the data buffer and the operation type is a read request, the acceleration device sends the operation object and the operation type to the NAS server, receives the operation result of the read request sent by the NAS server, where the operation result includes the target data and the directory and/or file to which the target data belongs, stores the operation result in the data buffer, and sends the operation result to the NAS client.
  • When the target data does not exist in the data buffer and the operation type is a write request, the acceleration device sends the operation object and the operation type to the NAS server, receives the response information of the write request on the target data sent by the NAS server, stores the target data in the data buffer, and sends the response information of the write operation to the NAS client.
  • Optionally, the acceleration device updates its local directory according to the directory and/or file information to which the target data in the operation result belongs.
  • In summary, after receiving the access request message of the NAS client, the acceleration device completes the remaining protocol processing of the prior art and the data transmission with the NAS server, thereby reducing the CPU load, memory usage, and NAS data access delay of the NAS client. Further, the data buffer of the acceleration device caches the accessed historical data, which improves the efficiency of NAS data processing, reduces the data access delay, and improves the processing efficiency of the entire NAS system.
  • According to a third aspect, the present invention provides a NAS data access system, which includes a NAS client and an acceleration device, where the acceleration device includes a first interface and a second interface, is connected to the NAS client through the first interface, and is connected to the NAS server through the second interface. The NAS client is configured to perform the operation steps of the first aspect or any possible implementation of the first aspect, and the acceleration device is configured to perform the operation steps of the second aspect or any possible implementation of the second aspect.
  • According to a fourth aspect, the present invention provides a NAS client for NAS data access, where the NAS client includes modules for performing the NAS data access method of the first aspect or any possible implementation of the first aspect.
  • According to a fifth aspect, the present invention provides an acceleration device for NAS data access, where the acceleration device includes modules for performing the NAS data access method of the second aspect or any possible implementation of the second aspect.
  • According to a sixth aspect, the present invention provides a NAS client for NAS data access, where the NAS client includes a processor, a memory, and a communication bus; the processor and the memory are connected through the communication bus and communicate with each other; the memory is configured to store computer-executable instructions; and when the NAS client runs, the processor executes the computer-executable instructions in the memory to perform, using the hardware resources of the NAS client, the method of the first aspect or any possible implementation of the first aspect.
  • According to a seventh aspect, a computer-readable medium is provided for storing a computer program, where the computer program includes instructions for performing the method of the first aspect or any possible implementation of the first aspect.
  • According to an eighth aspect, the present invention provides an acceleration device for NAS data access, where the acceleration device includes a processor, a memory, a user interface, a network interface, and a communication bus; the acceleration device is connected to the NAS client through the user interface and to the NAS server through the network interface; the processor, the memory, the user interface, and the network interface are connected through the communication bus and communicate with each other; the memory is configured to store computer-executable instructions; and when the acceleration device runs, the processor executes the computer-executable instructions in the memory to perform, using the hardware resources of the acceleration device, the method of the second aspect or any possible implementation of the second aspect.
  • According to a ninth aspect, a computer-readable medium is provided for storing a computer program, where the computer program includes instructions for performing the method of the second aspect or any possible implementation of the second aspect.
  • In summary, the NAS data access method, system, and related devices in the embodiments of the present invention use the acceleration device to complete the NAS protocol stack processing that the NAS client performs in the prior art, thereby reducing the CPU and memory load of the NAS client.
  • The acceleration device and the NAS client communicate over PCIe or another high-speed peripheral interface and transmit data using DMAFS packets, which reduces the processing delay.
  • The data buffer of the acceleration device caches historical data, which improves the efficiency of read processing during data access and thus improves the overall performance and data access efficiency of the NAS client.
  • On the basis of the implementations provided in the above aspects, the present application may further combine them to provide more implementations.
  • FIG. 1A is a schematic diagram of a NAS system architecture in the prior art;
  • FIG. 1B is a schematic diagram of a NAS system protocol stack in the prior art;
  • FIG. 2A is a schematic diagram of a hardware structure of a NAS system provided by the present invention;
  • FIG. 2B is a schematic diagram of a hardware structure of another NAS system provided by the present invention;
  • FIG. 2C is a schematic diagram of a hardware structure of another NAS system provided by the present invention;
  • FIG. 2D is a schematic diagram of a hardware structure of another NAS system provided by the present invention;
  • FIG. 3 is a schematic diagram of an initialization operation flow of a NAS data access method provided by the present invention;
  • FIG. 3A is a schematic diagram of a DMAFS packet format provided by the present invention;
  • FIG. 4 is a schematic flowchart of a NAS data access method provided by the present invention;
  • FIG. 5 is a schematic diagram of a NAS system protocol stack provided by the present invention;
  • FIG. 6A is a schematic flowchart of a NAS data access method when the operation type is a read request, provided by the present invention;
  • FIG. 6B is a schematic flowchart of a NAS data access method when the operation type is a write request, provided by the present invention;
  • FIG. 7 is a schematic diagram of a NAS client provided by the present invention;
  • FIG. 8 is a schematic diagram of another NAS client provided by the present invention;
  • FIG. 9 is a schematic diagram of an acceleration device provided by the present invention;
  • FIG. 10 is a schematic diagram of another acceleration device provided by the present invention.
  • The NAS data access method provided by the present invention is further described below with reference to the accompanying drawings.
  • FIG. 2A is a logical block diagram of a NAS data access system according to an embodiment of the present invention. As shown in the figure, the system includes a NAS client and an acceleration device. The acceleration device is connected to the NAS client through a Peripheral Component Interconnect Express (PCIe) interface and is connected to the NAS server through a network card. The acceleration device and the NAS client are each an endpoint device in the PCIe topology, and the two communicate through the PCIe bus.
  • FIG. 2B is a logical block diagram of another NAS data access system provided by the present invention. The difference from FIG. 2A is that the acceleration device may not be configured with a network card of its own; after being connected to the NAS client through PCIe, it uses the NAS client's network card for data transmission with the NAS server.
  • In a specific implementation, a central processing unit may be added to the network card of the NAS client, and that network card then serves as the acceleration device, connected to the NAS client through the PCIe interface and connected to the NAS server through the network card interface of the NAS client.
  • FIG. 2C is a logical block diagram of another NAS data access system according to an embodiment of the present invention. When the NAS client is a client machine with strict chassis restrictions, for example a Mac Pro, it is difficult to insert a general PCIe interface card into the NAS client. In this case, the acceleration device can be connected to the NAS client through a high-speed peripheral interface, such as a Thunderbolt interface, and the acceleration device communicates with the NAS server through its own network card. When the high-speed peripheral interface is a Thunderbolt interface, the connection between the Thunderbolt interfaces needs to be controlled by a Thunderbolt interface chip.
  • As shown in FIG. 2D, the acceleration device may also be configured without a network card; after being connected to the NAS client through the high-speed peripheral interface, it communicates with the NAS server through the NAS client's network card.
  • The network connecting the acceleration device to the NAS server in FIG. 2A to FIG. 2D can be Ethernet as shown in the figures, or another network type such as lossless Ethernet based on Data Center Bridging (DCB); this is not intended to limit the invention.
  • The present invention completes the protocol processing of the NAS data access process by using the acceleration device shown in FIG. 2A to FIG. 2D, thereby simplifying the protocol processing of the NAS client and reducing the CPU and memory load of the NAS client.
  • The NAS data access method provided by the present invention is further introduced with reference to FIG. 3. As shown in the figure, before the NAS client accesses NAS data, the NAS client, the acceleration device, and the NAS server need to perform an initialization operation. The initialization steps of the method include the following.
  • S301: The acceleration device sends a first request message to the NAS server, where the first request message is used to request the NAS server to send the directory storing the NAS data to the acceleration device.
  • Optionally, the acceleration device may request the NAS server to send directory information of a specified level according to a preset configuration. For example, the preset configuration may require that only the first-level root directory information of the stored NAS data be requested during initialization, or that the NAS server send all directory information of the stored NAS data to the acceleration device during initialization.
  • The protocol processing of the prior art is still used between the acceleration device and the NAS server. For example, when the network protocol used between the acceleration device and the NAS server is the Transmission Control Protocol/Internet Protocol (TCP/IP), data is transmitted through the protocol stack shown in FIG. 1B: the acceleration device first converts the data to be transmitted into NFS data, for example converting the user's access request message into parameters that NFS can recognize, such as the name of the file to be operated on; the NFS data is then encapsulated into an XDR packet, which includes placing the request parameters at specific locations in the packet; the XDR packet is then encapsulated into an RPC packet, which includes adding information such as an RPC sequence number and a check code; finally the RPC packet is encapsulated into a TCP/IP packet and transmitted to the NAS server through the network card, thereby remaining compatible with the protocol processing of NAS data access in the prior art.
  • The NAS server parses the TCP/IP packet sent by the acceleration device in the reverse order, processes the request message in the packet, and sends the processing result to the acceleration device.
  • When the acceleration device and the NAS server use the User Datagram Protocol/Internet Protocol (UDP/IP) for data transmission, the acceleration device first converts the data to be transmitted into NFS data, encapsulates the NFS data into an XDR packet, encapsulates the XDR packet into an RPC packet, and finally encapsulates the RPC packet into a UDP/IP packet and transmits it to the NAS server through the network card. The NAS server parses the UDP/IP packet sent by the acceleration device in the reverse order, processes the request message in the packet, and sends the processing result to the acceleration device.
  • When the acceleration device and the NAS server use Remote Direct Memory Access (RDMA) for data transmission, the acceleration device first converts the data to be transmitted into NFS data, encapsulates the NFS data into an XDR packet, encapsulates the XDR packet into an RPC packet, and finally encapsulates the RPC packet into an RDMA packet and transmits it to the NAS server through the network card. The NAS server parses the RDMA packet sent by the acceleration device in the reverse order, processes the request message in the packet, and sends the processing result to the acceleration device.
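  • For illustration, the following sketch shows the layered encapsulation the acceleration device performs toward the NAS server (NFS request into XDR, XDR into RPC, RPC into a transport packet). The header layouts and helper names are hypothetical simplifications, not the real ONC RPC or NFS wire formats.

```c
/*
 * Illustrative sketch of the encapsulation chain used between the
 * acceleration device and the NAS server: NFS request -> XDR -> RPC
 * -> transport (TCP/IP, UDP/IP or RDMA).  Header layouts and helper
 * names are hypothetical, not the actual on-wire formats.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

enum transport { XPORT_TCP_IP, XPORT_UDP_IP, XPORT_RDMA };

struct xdr_hdr { uint32_t len; };                    /* payload length            */
struct rpc_hdr { uint32_t xid; uint32_t checksum; }; /* RPC sequence number + crc */

/* Wrap an NFS request buffer into XDR, then RPC, then hand it to the
 * chosen transport.  Returns the total number of bytes "sent". */
static size_t send_nfs_request(enum transport t, uint32_t xid,
                               const void *nfs_data, uint32_t nfs_len)
{
    size_t total = sizeof(struct rpc_hdr) + sizeof(struct xdr_hdr) + nfs_len;
    uint8_t *pkt = malloc(total);
    if (!pkt)
        return 0;

    struct rpc_hdr rpc = { .xid = xid, .checksum = 0 /* placeholder */ };
    struct xdr_hdr xdr = { .len = nfs_len };

    memcpy(pkt, &rpc, sizeof rpc);                              /* RPC header  */
    memcpy(pkt + sizeof rpc, &xdr, sizeof xdr);                 /* XDR header  */
    memcpy(pkt + sizeof rpc + sizeof xdr, nfs_data, nfs_len);   /* NFS payload */

    /* A real implementation would now push 'pkt' into the TCP/IP,
     * UDP/IP or RDMA stack; here we only report what would be sent. */
    printf("transport %d: sending %zu bytes (xid=%u)\n", (int)t, total, xid);
    free(pkt);
    return total;
}

int main(void)
{
    const char req[] = "LOOKUP /Root/Dir_a/File_b";   /* stand-in NFS request */
    send_nfs_request(XPORT_TCP_IP, 1, req, sizeof req);
    return 0;
}
```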
  • S302: The acceleration device receives the mount directory information sent by the NAS server. Depending on the network protocol used between the NAS server and the acceleration device, the acceleration device receives the network protocol packet carrying the mount directory information, parses the packet, and obtains the mount directory information, where the mount directory information includes the directory in which the NAS data is stored on the NAS server.
  • S303: The acceleration device mounts the directory in the mount directory information to a local directory of the acceleration device. Specifically, the acceleration device generates a data structure for the local directory in memory according to the mount directory information and calls pointer functions to mount the directory storing the NAS data to the local directory of the acceleration device.
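  • As a minimal sketch of this step, the directory received from the NAS server can be represented as an in-memory tree node and grafted under a local mount point; the node layout and function names below are illustrative only.

```c
/*
 * Minimal sketch of grafting the exported NAS directory into a local
 * in-memory directory tree during initialisation.  The node layout and
 * helper names are illustrative, not the actual data structures.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct dir_node {
    char             name[64];
    struct dir_node *first_child;
    struct dir_node *next_sibling;
};

static struct dir_node *new_node(const char *name)
{
    struct dir_node *n = calloc(1, sizeof *n);
    snprintf(n->name, sizeof n->name, "%s", name);
    return n;
}

/* "Mount": attach the tree received in the mount directory information
 * under a local mount point, so later lookups walk a single tree. */
static void mount_remote(struct dir_node *local_mnt, struct dir_node *remote_root)
{
    remote_root->next_sibling = local_mnt->first_child;
    local_mnt->first_child    = remote_root;
}

int main(void)
{
    struct dir_node *local  = new_node("/mnt/nas");   /* local mount point    */
    struct dir_node *remote = new_node("Root");       /* from mount dir info  */
    remote->first_child     = new_node("Dir_a");

    mount_remote(local, remote);
    printf("%s/%s/%s\n", local->name, local->first_child->name,
           local->first_child->first_child->name);
    return 0;
}
```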
  • S304: The NAS client sends a second request message to the acceleration device, where the second request message is used by the NAS client to request the acceleration device to send the directory storing the NAS data on the NAS server to the NAS client.
  • Optionally, the NAS client may request the acceleration device to send directory information of a specified level according to a preset configuration. For example, the preset configuration may require that only the first-level root directory information of the stored NAS data be requested during initialization, or that the acceleration device send all directory information of the stored NAS data to the NAS client during initialization.
  • The NAS client and the acceleration device may mount the same level of the directory information storing the NAS data to their local directories, or, according to a preset configuration such as the user's operation permission, select different levels of the directory information to mount to their local directories.
  • The NAS client and the acceleration device transmit data through a DMA controller. That is, the NAS client converts the data to be sent into the format described by the preset file system type, generates a direct memory access file system (DMAFS) packet, and instructs the DMA controller to send the packet to the acceleration device. The DMA controller may be implemented by the acceleration device: when the NAS client needs to send a DMAFS packet that it has generated to the acceleration device, the processor of the NAS client notifies the DMA controller, by an instruction (such as a PCIe instruction), to send the DMAFS packet generated by the NAS client to the acceleration device; when the acceleration device needs to send a DMAFS packet that it has generated to the NAS client, the processor of the acceleration device notifies the DMA controller to send the DMAFS packet generated by the acceleration device to the NAS client. In this way the NAS protocol processing of the NAS client is moved to the acceleration device, reducing the load on the CPU and memory of the NAS client.
  • Optionally, the function of the DMA controller can also be implemented by the NAS client. In that case, when the NAS client needs to send a DMAFS packet to the acceleration device, the processor of the NAS client notifies the DMA controller, by an instruction (such as a PCIe instruction), to send the DMAFS packet generated by the NAS client to the acceleration device; when the acceleration device needs to send a DMAFS packet to the NAS client, the processor of the acceleration device notifies the DMA controller to send the DMAFS packet generated by the acceleration device to the NAS client.
  • The preset file system type is used to describe the format of the DMAFS packet and can be implemented by running the DMAFS on the NAS client and docking it with the virtual file system (VFS) layer. The DMAFS includes specific functions for performing the corresponding operations on data request messages, by which a data request message can be converted into the format described by the preset file system type, for example a function corresponding to the write operation, a function corresponding to the read operation, a function corresponding to creating a directory, a function corresponding to the delete operation, and a file offset function. The specific function corresponding to each operation is not limited in this embodiment of the present invention and can be found in the prior art.
  • Specifically, a file system type and four object structures are defined in the preset file system DMAFS, where the object structures include a superblock object, an inode object, a directory entry (dentry) object, and a file object. The file system type defines, at the system level, the functions used by the file system and the reference relationships among them.
  • The superblock object is used to manage the current file system, including the total number of inodes, the total number of blocks, and inode usage. The inode object records the index relationships of directories and files in the file system and mainly indicates the storage space of a file or directory and the corresponding file operations, such as renaming a file, creating a link, and modifying file permissions. The file object mainly indicates operations on opened files and directories, such as reading and writing file content. The directory entry object is mainly used to cache directory information for fast access to files and directories.
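  • For illustration, the following sketch writes out the file system type and the four object structures described above as C structures, loosely modelled on the Linux VFS; the member lists shown are assumptions, since the patent only names the roles of the objects.

```c
/*
 * Sketch of the file system type and the four DMAFS object structures
 * (superblock, inode, dentry, file), loosely modelled on the Linux VFS.
 * Members shown are illustrative.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct dmafs_superblock {        /* manages the file system as a whole       */
    uint64_t total_inodes;
    uint64_t total_blocks;
    uint64_t inodes_in_use;
};

struct dmafs_inode {             /* one file or directory: storage + ops     */
    uint64_t ino;
    uint64_t size;
    int (*rename)(struct dmafs_inode *, const char *new_name);
    int (*link)(struct dmafs_inode *, const char *target);
    int (*chmod)(struct dmafs_inode *, uint32_t mode);
};

struct dmafs_dentry {            /* caches directory entries for fast lookup */
    char                 name[64];
    struct dmafs_inode  *inode;
    struct dmafs_dentry *parent;
};

struct dmafs_file {              /* an opened file/directory and its ops     */
    struct dmafs_dentry *dentry;
    uint64_t             pos;
    long (*read)(struct dmafs_file *, void *buf, size_t len);
    long (*write)(struct dmafs_file *, const void *buf, size_t len);
};

struct dmafs_fs_type {           /* system-level definition of the file system */
    const char *name;                                   /* e.g. "dmafs"     */
    struct dmafs_superblock *(*mount)(const char *dev); /* entry point      */
};

int main(void)
{
    printf("dmafs objects: sb=%zu inode=%zu dentry=%zu file=%zu bytes\n",
           sizeof(struct dmafs_superblock), sizeof(struct dmafs_inode),
           sizeof(struct dmafs_dentry), sizeof(struct dmafs_file));
    return 0;
}
```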
  • During data processing, the request message is processed by the functions defined by the file system and output in the predefined packet format. The DMAFS packet format is shown in FIG. 3A.
  • A DMAFS packet includes a request number and DMAFS data. The request number identifies the number of the request being processed by the DMAFS system. The DMAFS data includes the operation object, the parameters of the user request, the execution status of the user request, and data. The parameters of the user request include the type of the user request (for example, read-only or write-only), the length of the data to be read or written, and the offset of the data to be read or written. The execution status of the user request identifies the execution result (such as success or failure).
  • The data field indicates the target data of a read or write request. For example, when the user request is a read request on the target data, the data field in the DMAFS packet sent by the NAS client to the acceleration device is empty, and the data field in the DMAFS packet sent by the acceleration device to the NAS client carries the target data; when the user request is a write request on the target data, the data field in the DMAFS packet sent by the NAS client to the acceleration device carries the target data, and the data field in the DMAFS packet sent by the acceleration device to the NAS client is empty.
  • Optionally, the DMAFS packet may further include a packet sequence number, a packet type, user verification information, and a user verification information check value, where the packet sequence number identifies the sending order of each packet, the packet type identifies the packet as a DMAFS packet, the user verification information identifies the access permission of the NAS client user, and the check value is used to verify the user verification information.
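  • As a sketch, the fields listed above can be laid out as the following C structure; the field names and widths are assumptions for illustration, since the patent only specifies which pieces of information the packet carries, not their sizes.

```c
/*
 * Sketch of the DMAFS packet layout described above.  Field names and
 * widths are illustrative assumptions.
 */
#include <stdint.h>
#include <stdio.h>

enum dmafs_op { DMAFS_READ, DMAFS_WRITE, DMAFS_MKDIR, DMAFS_UNLINK };

struct dmafs_params {            /* parameters of the user request            */
    enum dmafs_op type;          /* e.g. read-only, write-only                */
    uint64_t      length;        /* length of the data to read/write          */
    uint64_t      offset;        /* offset of the data to read/write          */
};

struct dmafs_packet {
    /* optional header fields */
    uint32_t seq_no;             /* per-packet sending sequence               */
    uint16_t pkt_type;           /* identifies the packet as a DMAFS packet   */
    uint8_t  user_auth[16];      /* access-permission credential of the user  */
    uint32_t user_auth_check;    /* check value for the credential            */

    /* mandatory content */
    uint32_t            request_no;   /* number of the request being handled  */
    char                object[256];  /* directory and/or file operated on    */
    struct dmafs_params params;
    int32_t             status;       /* execution result: success or failure */
    const uint8_t      *data;         /* target data: empty in a read request
                                         sent to the device, filled in the
                                         reply; the reverse for a write       */
};

int main(void)
{
    struct dmafs_packet p = { .request_no = 1,
                              .params = { DMAFS_READ, 4096, 0 } };
    printf("request %u, op %d, length %llu\n", p.request_no,
           (int)p.params.type, (unsigned long long)p.params.length);
    return 0;
}
```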
  • S305: The acceleration device sends the mount directory information storing the NAS data to the NAS client. The mount directory information is the mount directory information sent by the NAS server to the acceleration device in step S302. The preset file system also runs on the processor of the acceleration device to convert the data to be sent into the format described by the preset file system type; this file system is the same as the one described in step S304 and is not described again.
  • S306: The NAS client mounts the directory in the mount directory information to a local directory of the NAS client. Specifically, the NAS client generates a data structure for the local directory according to the mount directory information and calls pointer functions in sequence to mount the directory storing the NAS data to the local directory of the NAS client.
  • Through the above process, the NAS client, the acceleration device, and the NAS server complete the initialization, and the directory information storing the NAS data is mounted to the local directories to facilitate subsequent NAS data access.
  • FIG. 4 is a schematic flowchart of a NAS data access method according to the present invention. As shown in the figure, the method includes the following steps.
  • S401: The NAS client receives an access request message and determines an operation object according to the information of the target data to be accessed carried in the access request message. The access request message carries the information of the target data to be accessed and the operation type, and the operation object includes the directory and/or file to which the target data belongs.
  • The target data information in the user's access request message is a character string, whereas the file system can only identify index information of files and directories. Therefore, after receiving the access request message, the NAS client uses the functions of the preset file system to determine the operation object, that is, the directory and/or file to which the target data to be accessed belongs, according to the information of the target data.
  • For example, when the target data information is the string "Root/Dir_a/File_b", the NAS client uses the read function in the DMAFS to execute the following operations in sequence: first read the directory and file information contained in the Root directory, then read the directory and file information contained in the Root/Dir_a directory, and finally locate the File_b file in the Root/Dir_a directory. In this way the NAS client converts the string information of the target data in the user's access request message into file and directory information that NFS can recognize, as sketched below.
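  • The following sketch shows this path resolution one component at a time; the lookup helper is a placeholder for the DMAFS read/lookup function, and the target path is the example string assumed above.

```c
/*
 * Sketch of resolving the string "Root/Dir_a/File_b" from an access
 * request into per-level directory/file information, one component at
 * a time.  Helper names are illustrative.
 */
#include <stdio.h>
#include <string.h>

/* Stand-in for "read the entries of a directory and find one name". */
static int lookup_component(const char *dir, const char *name)
{
    printf("read entries of %-16s -> found %s\n", dir, name);
    return 0;   /* 0 = found; a real lookup would return an inode/dentry */
}

static int resolve_path(const char *path)
{
    char buf[256];
    char cur[256] = "/";                 /* current directory, starting at root */
    snprintf(buf, sizeof buf, "%s", path);

    for (char *tok = strtok(buf, "/"); tok != NULL; tok = strtok(NULL, "/")) {
        if (lookup_component(cur, tok) != 0)
            return -1;                   /* component not found */
        size_t len = strlen(cur);
        snprintf(cur + len, sizeof cur - len, "%s/", tok);  /* descend */
    }
    return 0;
}

int main(void)
{
    return resolve_path("Root/Dir_a/File_b");
}
```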
  • It should be noted that, after the NAS client receives the user's access request message, the data is forwarded through the VFS layer. The role of the VFS is to provide a unified operation interface and application programming interface for various file systems; it is a glue layer that allows read and write requests to work regardless of the underlying storage medium and file system type. That is, the VFS layer only forwards the access request message and does not process it.
  • S402: The NAS client generates a first DMAFS packet according to the preset file system type, where the preset file system type describes the format of the DMAFS packet and the first DMAFS packet includes the operation object and the operation type.
  • S403: The NAS client sends the first DMAFS packet to the acceleration device.
  • S404: The acceleration device obtains the operation type and the operation object from the first DMAFS packet.
  • S405: The acceleration device converts the operation object and the operation type into network file system NFS data and encapsulates the NFS data into a network protocol packet. The protocol processing of the prior art is still used between the acceleration device and the NAS server: the acceleration device first performs the data conversion of the NFS layer on the received operation type and operation object, for example obtaining the related parameters of the operation object and operation type (such as the address of the received data) and storing the parameter information into the associated data structure, and then encapsulates the NFS data into a network protocol packet. The network protocol may be TCP/IP, UDP/IP, or RDMA.
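  • A minimal sketch of this NFS-layer conversion step is given below: the operation object and type from the first DMAFS packet are copied into a simplified NFS argument structure before being handed to the XDR/RPC/transport encapsulation. The structures are stand-ins for the real NFS procedure arguments, not the actual NFS data layout.

```c
/*
 * Sketch of step S405: take the operation object and type from the
 * first DMAFS packet, fill an NFS argument structure, and hand it on
 * for XDR/RPC/transport encapsulation.  Simplified stand-ins only.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct dmafs_request {           /* subset of the first DMAFS packet        */
    int      is_write;
    char     object[256];        /* directory and/or file to operate on     */
    uint64_t offset, length;
};

struct nfs_args {                /* simplified NFS call arguments           */
    uint32_t proc;               /* NFSv3 procedure, e.g. 6 = READ, 7 = WRITE */
    char     path[256];
    uint64_t offset, count;
};

static void dmafs_to_nfs(const struct dmafs_request *rq, struct nfs_args *out)
{
    out->proc   = rq->is_write ? 7u : 6u;
    out->offset = rq->offset;
    out->count  = rq->length;
    snprintf(out->path, sizeof out->path, "%s", rq->object);
}

int main(void)
{
    struct dmafs_request rq = { .is_write = 0, .offset = 0, .length = 4096 };
    strcpy(rq.object, "Root/Dir_a/File_b");

    struct nfs_args args;
    dmafs_to_nfs(&rq, &args);    /* next step: XDR/RPC/transport encapsulation */
    printf("NFS proc=%u path=%s count=%llu\n",
           args.proc, args.path, (unsigned long long)args.count);
    return 0;
}
```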
  • S406: The acceleration device sends the network protocol packet to the NAS server. Depending on the network protocol, the network protocol packet may be the TCP/IP packet of step S405, a UDP/IP packet, or an RDMA packet.
  • After receiving the network protocol packet, the NAS server performs a read request operation or a write request operation on the target data according to the operation object and the operation type carried in the packet and returns the operation result for the target data to the acceleration device, which then returns the operation result to the NAS client.
  • FIG. 5 is a schematic diagram of the simplified protocol stack based on the TCP/IP network protocol. Compared with FIG. 1B, it can be seen that, for the NAS client, a data access request no longer needs to go through NFS, XDR, RPC, and TCP/IP processing but is sent directly to the acceleration device by DMA, and the acceleration device completes the remaining protocol processing. Compared with the protocol stack of the prior art, the embodiment of the present invention therefore offloads the NAS client's protocol processing to the acceleration device, which simplifies the protocol processing of the NAS client and reduces its CPU load.
  • In FIG. 5, the DMA server is deployed on the acceleration device side and the DMA client is deployed on the NAS client side. When the DMA controller is implemented by the NAS client, the DMA server is on the NAS client side and the DMA client is on the acceleration device side.
  • When the NAS client and the NAS server transmit data based on UDP/IP or RDMA, the protocol processing of the prior art is likewise offloaded from the NAS client to the acceleration device, and the acceleration device completes the NFS-to-UDP/IP or NFS-to-RDMA conversion, which also simplifies the protocol processing of the NAS client and reduces its CPU load.
  • It can be seen from the above description that the acceleration device completes the protocol processing from the NFS layer downward in the existing protocol stack, which solves the problems of high CPU load, high memory usage, and long processing time caused by heavy protocol processing on the NAS client and improves the overall performance and data access efficiency of the NAS client.
  • Moreover, a DMA engine is used for data transmission between the NAS client and the acceleration device; the CPU does not participate in the DMA data transfer, which greatly reduces CPU occupancy, improves CPU efficiency, and reduces the delay of NAS data access.
  • Optionally, the acceleration device may further include a data buffer, which serves as the NFS cache and addresses the problems in the prior art that the NFS cache capacity is small, the hit rate is low, and the delay is high.
  • FIG. 6A is a schematic flowchart of a NAS data access method when the operation type is a read request, according to the present invention. As shown in the figure, the method includes the following steps.
  • S601: The NAS client receives an access request message and determines an operation object according to the information of the target data to be accessed carried in the access request message.
  • S602: The NAS client generates a first DMAFS packet according to the preset file system type.
  • S603: The NAS client sends the first DMAFS packet to the acceleration device.
  • S604: The acceleration device obtains the operation type and the operation object from the first DMAFS packet.
  • Steps S601 to S604 are the same as steps S401 to S404 and are not described again.
  • S605: When the target data exists in the data buffer, the acceleration device obtains the target data and the directory and/or file to which the target data belongs from the data buffer.
  • The data buffer of the acceleration device stores the accessed NAS data and the historical data of the directories and/or files to which that data belongs. When the target data is already in the data buffer, it can be obtained directly from the buffer, which improves the access efficiency of NAS data and shortens the data access delay; step S609 is then performed. When the target data does not exist in the data buffer, steps S606 to S609 are performed.
  • The capacity of the data stored in the data buffer of the acceleration device may be controlled by a preset threshold; when the threshold is exceeded, the acceleration device may delete the earliest stored historical data of a specified capacity according to a preset configuration, as sketched below.
  • S606: When the target data does not exist in the data buffer, the acceleration device sends the network protocol packet carrying the operation type and the operation object to the NAS server, using the prior-art protocol processing described in steps S405 and S406.
  • S607: The NAS server sends the operation result for the target data to the acceleration device. Specifically, the NAS server parses the packet, performs the operation on the target data according to the operation type and the operation object carried in the packet, and encapsulates the operation result into a network protocol packet to be sent to the acceleration device, where the operation result includes the target data and the directory and/or file to which the target data belongs.
  • The encapsulation and parsing of the network protocol packets transmitted between the NAS server and the acceleration device in steps S606 and S607 are the same as in steps S301 and S302 and are not described again.
  • S608: The acceleration device stores the operation result in the data buffer.
  • S609: The acceleration device generates a second DMAFS packet according to the preset file system type, where the second DMAFS packet includes the operation result, that is, the target data and the directory and/or file to which the target data belongs. The generation process of the second DMAFS packet is the same as that of step S402 and is not described again.
  • S610: The acceleration device sends the second DMAFS packet to the NAS client.
  • Optionally, the acceleration device updates its local directory information according to the operation result; the update process is the same as that described in step S303 and is not described again. The NAS client also updates its local directory information according to the operation result; the update process is the same as that described in step S306 and is not described again.
  • FIG. 6B is a schematic flowchart of a NAS data access method when the operation type is a write request. As shown in the figure, the method includes the following steps.
  • S611: The NAS client receives an access request message and determines an operation object according to the information of the target data to be accessed carried in the access request message.
  • S612: The NAS client generates a first DMAFS packet according to the preset file system type.
  • S613: The NAS client sends the first DMAFS packet to the acceleration device.
  • S614: The acceleration device obtains the operation type and the operation object from the first DMAFS packet.
  • Steps S611 to S614 are the same as steps S401 to S404 and are not described again.
  • S615: When the target data exists in the data buffer, the acceleration device performs the write request operation on the target data in the data buffer according to the operation type.
  • S616: The acceleration device sends the operation type and the operation object to the NAS server. Since the data in the data buffer is a cache of the data stored on the NAS server, the acceleration device still needs to send the operation type and the operation object to the NAS server after modifying the data in the data buffer, so that the NAS server performs the write operation on the stored target data.
  • S617: The acceleration device receives the response information of the write request operation on the target data sent by the NAS server. After performing the write request operation on the target data, the NAS server sends response information for the write request operation to the acceleration device, where the response information indicates whether the write operation on the target data succeeded. In step S617, the NAS server sends the response information to the acceleration device in a network protocol packet; the specific process is the same as that in step S301 and is not described again.
  • When the target data does not exist in the data buffer, the process starting from step S618 is performed.
  • S618: When the target data does not exist in the data buffer, the acceleration device sends the operation type and the operation object to the NAS server.
  • S619: The acceleration device receives the response information of the write operation on the target data sent by the NAS server.
  • S620: The acceleration device stores the target data in the data buffer, that is, stores the target data and the information of the directory and/or file to which the target data belongs in the data buffer, so that when a subsequent read request is processed the target data can be found quickly in the data buffer, improving read performance.
  • S621: The acceleration device generates a second DMAFS packet according to the preset file system type.
  • S622: The acceleration device sends the second DMAFS packet to the NAS client.
  • The processing of steps S621 and S622 is the same as that of steps S609 and S610 and is not described again.
  • Optionally, the acceleration device updates its local directory information according to the operation result; the update process is the same as that described in step S303 and is not described again. The NAS client also updates its local directory information according to the operation result; the update process is the same as that described in step S306 and is not described again.
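  • The sketch below ties the read flow of FIG. 6A and the write flow of FIG. 6B together on the acceleration device: a read is served from the data buffer when possible and otherwise fetched from the NAS server and cached, while a write updates the buffered copy (when present) and is always forwarded to the NAS server. The forward_read/forward_write and cache_* names are placeholders for the message exchanges described above, not actual functions.

```c
/*
 * Sketch tying together the read flow (FIG. 6A) and write flow
 * (FIG. 6B) on the acceleration device.  forward_read()/forward_write()
 * stand in for the XDR/RPC/transport exchange with the NAS server, and
 * the cache_* helpers for the data buffer; all names are placeholders.
 */
#include <stdio.h>
#include <string.h>

static int  cache_lookup(const char *path, char *out, size_t len);
static void cache_insert(const char *path, const char *data);
static void forward_read(const char *path, char *out, size_t len);
static int  forward_write(const char *path, const char *data);

/* Read request: S605 (hit) or S606-S609 (miss), then reply to client. */
static void handle_read(const char *path, char *out, size_t len)
{
    if (cache_lookup(path, out, len))           /* hit: answer locally      */
        return;
    forward_read(path, out, len);               /* miss: ask the NAS server */
    cache_insert(path, out);                    /* keep for later reads     */
}

/* Write request: update the buffer if present, always notify the server. */
static int handle_write(const char *path, const char *data)
{
    char tmp[512];
    if (cache_lookup(path, tmp, sizeof tmp))    /* S615: update cached copy   */
        cache_insert(path, data);
    int ok = forward_write(path, data);         /* S616/S618: tell the server */
    if (ok)
        cache_insert(path, data);               /* S620: cache written data   */
    return ok;                                  /* S621/S622: reply to client */
}

/* --- trivial stubs so the sketch is self-contained ---------------------- */
static char stub_path[128], stub_data[512];
static int cache_lookup(const char *p, char *o, size_t n)
{ if (strcmp(p, stub_path)) return 0; snprintf(o, n, "%s", stub_data); return 1; }
static void cache_insert(const char *p, const char *d)
{ snprintf(stub_path, sizeof stub_path, "%s", p);
  snprintf(stub_data, sizeof stub_data, "%s", d); }
static void forward_read(const char *p, char *o, size_t n)
{ (void)p; snprintf(o, n, "data from NAS server"); }
static int forward_write(const char *p, const char *d)
{ (void)p; (void)d; return 1; }

int main(void)
{
    char buf[512];
    handle_write("Root/Dir_a/File_b", "new contents");
    handle_read("Root/Dir_a/File_b", buf, sizeof buf);
    printf("%s\n", buf);
    return 0;
}
```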
  • In summary, offloading the NAS client protocol processing to the acceleration device solves the problems in the prior art of high NAS client CPU load, high memory usage, and prolonged processing time during NAS data access. Further, using the data buffer of the acceleration device to store historical data reduces the access delay of read operations and improves the read processing efficiency of NAS data access.
  • In the prior art, the limited cache area of NAS clients in the media asset industry means that the cache associated with the network file system is small, resulting in a low hit rate and high data access latency. By moving the cache associated with the network file system onto the acceleration device, and by not caching at the NAS client the access requests delivered from the VFS, these problems are solved and the processing efficiency of NAS data access is improved.
  • It should be understood that, in the various embodiments of the present invention, the magnitude of the sequence numbers of the above processes does not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and shall not constitute any limitation on the implementation of the embodiments of the present invention.
  • The NAS data access method provided by the embodiments of the present invention has been described in detail above with reference to FIG. 2A to FIG. 6B.
  • The NAS client and the acceleration device for NAS data access provided by the embodiments of the present invention are described below with reference to FIG. 7 to FIG. 10.
  • FIG. 7 is a schematic diagram of a NAS client 700 provided by the present invention.
  • the NAS client 700 includes a receiving unit 701, a processing unit 702, and a sending unit 703.
  • the receiving unit 701 is configured to receive an access request message of the user.
  • The processing unit 702 is configured to determine an operation object according to the information about the target data to be accessed carried in the access request message received by the receiving unit 701, where the operation object includes the directory and/or file to which the target data belongs; and to generate a first DMAFS packet according to the format described by the preset file system type, where the preset file system type is used to describe the format of a DMAFS packet, and the first DMAFS packet includes the operation object and the operation type carried in the access request message. One possible layout of such a packet is sketched below.
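As an illustration only, the first DMAFS packet built by the processing unit 702 can be pictured as a small header plus payload, following the format described for DMAFS packets elsewhere in this document (request number, operation object, parameters of the user request, execution status, and data; optionally a packet sequence number, packet type, and user verification information). The field names and sizes in the sketch below are assumptions, not a normative wire format.

```c
#include <stdint.h>

enum dmafs_op_type { DMAFS_OP_READ, DMAFS_OP_WRITE, DMAFS_OP_MKDIR, DMAFS_OP_DELETE };

struct dmafs_request_params {
    enum dmafs_op_type op_type;  /* operation type carried in the access request */
    uint64_t           offset;   /* offset of the data to be read or written     */
    uint64_t           length;   /* length of the data to be read or written     */
};

struct dmafs_packet {
    uint64_t                    request_no;  /* number of the request being handled     */
    char                        object[256]; /* operation object: directory and/or file */
                                             /* to which the target data belongs        */
    struct dmafs_request_params params;      /* parameters of the user request          */
    int32_t                     status;      /* execution status (e.g. success/failure) */
    uint64_t                    data_len;    /* number of payload bytes that follow     */
    uint8_t                     data[];      /* write payload, or read result on reply  */
};
```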
  • the sending unit 703 is configured to send the first DMAFS packet to the acceleration device.
  • The sending unit 703 is further configured to send, before the NAS client receives the access request message, a third DMAFS packet to the acceleration device, where the third DMAFS packet is used to request, from the acceleration device, the directory in which NAS data is stored.
  • the receiving unit 701 is further configured to receive the mount directory information sent by the acceleration device, and mount the directory that stores the NAS data carried in the mount directory information to a local directory.
  • The receiving unit 701 is further configured to receive a second DMAFS packet sent by the acceleration device, where the second DMAFS packet carries an operation result for the first DMAFS packet, and the operation result includes the target data and the directory and/or file to which the target data belongs.
  • The processing unit 702 is further configured to update the local directory of the NAS client according to the directory and/or file information to which the target data in the operation result belongs, as illustrated by the sketch below.
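A minimal sketch of that directory update is shown below: the client walks its local directory tree along the path carried in the operation result and inserts any components it does not yet have. The tree structure and helper function are hypothetical and only illustrate the idea.

```c
#include <stdlib.h>
#include <string.h>

struct dir_node {
    char             name[64];
    struct dir_node *child;    /* first entry below this node  */
    struct dir_node *sibling;  /* next entry at the same level */
};

/* Find a child with the given name, or create it if it is missing. */
static struct dir_node *find_or_add(struct dir_node *parent, const char *name)
{
    for (struct dir_node *c = parent->child; c; c = c->sibling)
        if (strcmp(c->name, name) == 0)
            return c;

    struct dir_node *n = calloc(1, sizeof(*n));  /* error handling omitted in the sketch */
    strncpy(n->name, name, sizeof(n->name) - 1);
    n->sibling    = parent->child;
    parent->child = n;
    return n;
}

/* Update the local directory from a path such as "Root/Dir_a/File_b"
 * carried in the operation result.                                   */
void update_local_directory(struct dir_node *root, const char *path)
{
    char buf[256];
    strncpy(buf, path, sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0';

    struct dir_node *cur = root;
    for (char *tok = strtok(buf, "/"); tok; tok = strtok(NULL, "/"))
        cur = find_or_add(cur, tok);
}
```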
  • It should be understood that the NAS client 700 according to this embodiment of the present invention may correspond to performing the methods described in the embodiments of the present invention, and that the above and other operations and/or functions of the units in the NAS client 700 are respectively intended to implement the corresponding processes of the methods in FIG. 2A to FIG. 6B; for brevity, details are not described here again.
  • In this embodiment, by adding an acceleration device to the NAS client, the acceleration device completes the protocol processing from the NFS layer onward in the existing protocol stack, which solves the problems of high CPU load, high memory usage, and prolonged processing time caused by the heavy protocol processing on the NAS client, and improves the overall performance and data access efficiency of the NAS client.
  • FIG. 8 is a schematic diagram showing the hardware structure of a NAS client 800.
  • The NAS client 800 includes one or more processors 801 (only one is shown in the figure), a memory 802, and a communication bus 805.
  • FIG. 8 is merely illustrative and does not limit the structure of the NAS client 800.
  • NAS client 800 may also include more or fewer components than shown in FIG. 8, or have a different configuration than that shown in FIG.
  • The processor 801 and the memory 802 are connected and communicate with each other through the communication bus 805, and the memory 802 is configured to store computer-executable instructions.
  • When the NAS client 800 runs, the processor 801 executes the computer-executable instructions in the memory 802 to use the hardware resources of the NAS client 800 to perform the following operations: receiving an access request message and determining an operation object according to the information about the target data to be accessed carried in the message, where the operation object includes the directory and/or file to which the target data belongs; generating a first DMAFS packet according to the format described by the preset file system type; and sending the first DMAFS packet to the acceleration device, so that the acceleration device converts the operation object and the operation type in the first DMAFS packet into network file system NFS data, encapsulates the NFS data into a network transport protocol packet, and sends it to the NAS server.
  • the communication bus 805 is used for communication between the components in the NAS client 800.
  • The processor 801 performs various functional applications and data processing by running the software programs and modules stored in the memory 802 (such as the virtual file system 8011 and the direct memory access file system 8012); for example, the processor 801 calls the program instructions in the memory 802 for encapsulating the operation type and the operation object, and encapsulates the operation result on the target data into a packet in the DMAFS format.
  • The processor 801 may be a CPU, or may be another general-purpose processor, a digital signal processor (DSP), an ARM processor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • Further, the NAS client 800 provided by this embodiment of the present invention also includes a DMA controller 806, which is integrated on the hardware board of the NAS client 800; the access interface of the DMA controller 806 interfaces with the direct memory access file system and the network file system running on the processor.
  • Under the control of the processor, the DMA controller 806 can implement data transfer between the NAS client and the acceleration device over the PCIe bus; that is, the DMA controller 806 can move a DMAFS packet generated by the NAS client from the NAS client to the acceleration device, or move a DMAFS packet generated by the acceleration device from the acceleration device to the NAS client, without involving the processor 801, which speeds up processing in the computer system and effectively improves data transfer performance. A sketch of such a hand-off is given below.
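Purely for illustration, the hand-off might look like the sketch below: the processor fills a descriptor with source and destination bus addresses and rings a doorbell register, after which the DMA controller moves the DMAFS packet without further processor involvement. The descriptor layout, flag values, and register access are invented for this sketch and do not describe a real controller.

```c
#include <stdint.h>

/* Hypothetical descriptor for moving one DMAFS packet over the PCIe bus. */
struct dma_descriptor {
    uint64_t src_bus_addr;  /* bus address of the DMAFS packet in host memory       */
    uint64_t dst_bus_addr;  /* bus address of the buffer on the acceleration device */
    uint32_t length;        /* number of bytes to move                               */
    uint32_t flags;         /* e.g. transfer direction, interrupt on completion      */
};

#define DMA_FLAG_TO_DEVICE 0x1u
#define DMA_FLAG_IRQ_DONE  0x2u

/* Placeholder for a memory-mapped register write to the DMA controller. */
static void mmio_write64(volatile uint64_t *reg, uint64_t val) { *reg = val; }

/* Hand one DMAFS packet to the DMA controller; the processor is then free
 * until the completion interrupt arrives.                                 */
void dma_send_dmafs_packet(volatile uint64_t *doorbell, struct dma_descriptor *desc,
                           uint64_t packet_bus_addr, uint32_t len,
                           uint64_t device_buf_addr)
{
    desc->src_bus_addr = packet_bus_addr;
    desc->dst_bus_addr = device_buf_addr;
    desc->length       = len;
    desc->flags        = DMA_FLAG_TO_DEVICE | DMA_FLAG_IRQ_DONE;

    mmio_write64(doorbell, (uint64_t)(uintptr_t)desc);  /* start the transfer */
}
```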
  • the functionality of the DMA controller can also be implemented by the processor 801 of the NAS client 800.
  • The memory 802 may be configured to store software programs, modules, and databases; for example, in this embodiment of the present invention, the processor 801 sends to the DMA controller 806 the program instructions/modules corresponding to the operation type and the operation object, so that a DMAFS packet generated by the NAS client is moved to the acceleration device, or a DMAFS packet generated by the acceleration device is moved to the NAS client.
  • Memory 802 can include high speed random access memory and can also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid state memory.
  • memory 802 can further include memory remotely located relative to processor 801 that can be connected to NAS client 800 over a network.
  • the memory 802 can include read only memory and random access memory and provides instructions and data to the processor 801. A portion of the memory 802 may also include a non-volatile random access memory.
  • the memory 802 can also store information of the device type.
  • the NAS client 800 may further include a user interface 803 for plugging in an external device.
  • the user interface 803 includes the PCIe interface or the high-speed peripheral interface of FIG. 2A to FIG. 2D. It can also be used to connect devices such as touch screens, mice, and keyboards to receive information entered by users.
  • The network interface 804 is used for the NAS client 800 to communicate with the outside.
  • the network interface 804 mainly includes a wired interface and a wireless interface, such as a network card, an RS232 module, a radio frequency module, a WIFI module, and the like.
  • A person of ordinary skill in the art may understand that the structure shown in FIG. 8 is merely illustrative and does not limit the structure of the NAS client 800.
  • For example, the NAS client 800 may include more or fewer components than those shown in FIG. 8, or have a configuration different from that shown in FIG. 8.
  • For instance, the NAS client 800 may not include the memory 802, and the memory 802 may instead be implemented by a device outside the NAS client.
  • the NAS client 800 corresponds to the NAS client 700 provided by the embodiment of the present invention.
  • The NAS client 800 is used to implement the corresponding processes of the NAS client in the methods shown in FIG. 2A to FIG. 6B; for brevity, details are not described here again.
  • In summary, offloading the NAS client protocol processing to the acceleration device solves the problems in the prior art of high NAS client CPU load, high memory usage, and prolonged processing time during NAS data access. Further, using the data buffer of the acceleration device to store historical data reduces the access delay of read operations and improves the read processing efficiency of NAS data access.
  • Compared with the prior art, where the limited cache area of NAS clients in the streaming media industry means that the cache associated with the network file system is small, resulting in a low hit rate and high data access latency, moving the cache associated with the network file system onto the acceleration device, and not caching at the NAS client the access requests delivered from the VFS, solves these problems and improves the processing efficiency of NAS data access.
  • FIG. 9 is a schematic diagram of an acceleration device 900 according to the present invention. As shown in the figure, the acceleration device 900 includes a receiving unit 901, a processing unit 902, and a sending unit 903:
  • the receiving unit 901 is configured to receive a first DMAFS packet sent by the NAS client, where the first DMAFS packet carries the operation object and the operation type.
  • The processing unit 902 is configured to obtain the operation object and the operation type from the first DMAFS packet, convert the operation object and the operation type into network file system NFS data, and encapsulate the NFS data into a network protocol packet.
  • the sending unit 903 is configured to send the network protocol packet to the NAS server.
  • the network protocol includes any one of the following protocols: TCP/IP, UDP/IP, and RDMA.
  • By offloading the protocol processing of the NAS client to the acceleration device 900, the CPU and memory load of the NAS client is reduced; in addition, the acceleration device 900 and the NAS client exchange data using DMAFS packets, which reduces the processing delay and improves the efficiency of the entire NAS data access process.
  • The processing unit 902 is further configured to: when the network protocol is TCP/IP, encapsulate the NFS data into a first external data representation (XDR) packet, encapsulate the first XDR packet into a first remote procedure call (RPC) packet, and encapsulate the first RPC packet into a first TCP/IP packet.
  • The processing unit 902 is further configured to: when the network protocol is UDP/IP, encapsulate the NFS data into a first XDR packet, encapsulate the first XDR packet into a first RPC packet, and encapsulate the first RPC packet into a first UDP/IP packet.
  • The processing unit 902 is further configured to: when the network protocol is RDMA, encapsulate the NFS data into a first XDR packet, encapsulate the first XDR packet into a first RPC packet, and encapsulate the first RPC packet into a first RDMA packet. The layering common to all three cases is sketched below.
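The three variants above differ only in the outermost transport; the layering NFS -> XDR -> RPC -> {TCP/IP, UDP/IP, RDMA} is the same. The sketch below shows that layering with deliberately simplified stand-in headers; real XDR, RPC, and transport encodings are much richer, and the helper names are assumptions made for the example.

```c
#include <stdint.h>
#include <string.h>

enum transport { NET_TCP_IP, NET_UDP_IP, NET_RDMA };

/* Simplified stand-in for one encapsulation layer: prepend a one-byte tag. */
static size_t wrap(uint8_t tag, const uint8_t *in, size_t in_len,
                   uint8_t *out, size_t out_cap)
{
    if (in_len + 1 > out_cap)
        return 0;
    out[0] = tag;
    memcpy(out + 1, in, in_len);
    return in_len + 1;
}

/* NFS data -> first XDR packet -> first RPC packet -> first transport packet. */
size_t encapsulate_nfs_data(enum transport t, const uint8_t *nfs_data, size_t nfs_len,
                            uint8_t *out, size_t out_cap)
{
    uint8_t xdr[2048], rpc[2048];

    size_t xdr_len = wrap('X', nfs_data, nfs_len, xdr, sizeof(xdr)); /* first XDR packet */
    if (xdr_len == 0)
        return 0;
    size_t rpc_len = wrap('R', xdr, xdr_len, rpc, sizeof(rpc));      /* first RPC packet */
    if (rpc_len == 0)
        return 0;

    switch (t) {
    case NET_TCP_IP: return wrap('T', rpc, rpc_len, out, out_cap);   /* first TCP/IP packet */
    case NET_UDP_IP: return wrap('U', rpc, rpc_len, out, out_cap);   /* first UDP/IP packet */
    case NET_RDMA:   return wrap('D', rpc, rpc_len, out, out_cap);   /* first RDMA packet   */
    }
    return 0;
}
```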
  • The sending unit 903 is further configured to send, before the receiving unit 901 receives the access request message, a first request message to the NAS server, where the first request message is used to request, from the NAS server, the directory in which NAS data is stored.
  • the receiving unit 901 is further configured to receive the mounting directory information sent by the NAS server, where the mounting directory information includes information about a directory in which the NAS data is stored in the NAS server.
  • the processing unit 902 is further configured to: mount, according to the mounted directory information, a directory in the NAS server that stores the NAS data into a local directory.
  • The receiving unit 901 is further configured to receive a third DMAFS packet, where the third DMAFS packet is used by the NAS client to request, from the acceleration device, the directory in which the NAS data is stored.
  • the sending unit 903 is further configured to send the mount directory information to the NAS client.
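The initialization exchange handled by these units can be summarized by the sketch below: the acceleration device first obtains the directory storing the NAS data from the NAS server and mounts it locally, and later answers the NAS client's third DMAFS packet with the same mount directory information. All functions and the example directory value are placeholders for this illustration.

```c
#include <string.h>

struct mount_info {
    char nas_dir[256];  /* directory on the NAS server that stores the NAS data */
};

static struct mount_info cached_mount;  /* kept by the acceleration device */

/* Placeholders for the network exchange with the NAS server. */
static int send_first_request_to_server(void) { return 0; }
static void receive_mount_directory_info(struct mount_info *m)
{
    strncpy(m->nas_dir, "/example/nas_data", sizeof(m->nas_dir) - 1); /* illustrative value */
}
static void mount_into_local_directory(const struct mount_info *m) { (void)m; }

/* Run once, before any access request message is processed. */
void acceleration_device_init(void)
{
    send_first_request_to_server();              /* first request message        */
    receive_mount_directory_info(&cached_mount); /* mount directory information  */
    mount_into_local_directory(&cached_mount);
}

/* Called when the third DMAFS packet arrives from the NAS client. */
void on_third_dmafs_packet(struct mount_info *reply)
{
    *reply = cached_mount;  /* mount directory information returned to the client */
}
```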
  • The processing unit 902 is further configured to: when the target data exists in the data buffer, perform an operation on the target data according to the operation object and the operation type.
  • the sending unit 903 is further configured to send an operation result of the target data to the NAS client.
  • the processing unit 902 is further configured to: when the operation type is a read request, acquire the target data in the data buffer and a directory and/or a file to which the target data belongs.
  • the sending unit 903 is further configured to send, to the NAS client, the target data and a directory and/or a file to which the target data belongs.
  • the processing unit 902 is further configured to: when the operation type is a write request, acquire the target data, and perform an operation of the write request on the target data.
  • the sending unit 903 is further configured to send the operation object and the operation type to the NAS server, and send the response information of the write operation to the NAS client.
  • the receiving unit 901 is further configured to receive response information of a write operation of the target data by the NAS server, where the response information of the write operation is used to indicate whether the target data write operation is successful.
  • the sending unit 903 is further configured to: when the target data does not exist in the data buffer, send the operation object and the operation type to the NAS server.
  • the receiving unit 901 is configured to receive an operation result of the target data sent by the NAS server.
  • the sending unit 903 is further configured to: when the operation type is a read request, send the operation object and the operation type to the NAS server.
  • The receiving unit 901 is further configured to receive, from the NAS server, the operation result of the read request for the target data, where the operation result of the read request includes the target data and the directory and/or file to which the target data belongs.
  • the processing unit 902 is further configured to store the operation result in the data buffer area.
  • the sending unit 903 is further configured to send the operation result to the NAS client.
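When the target data is not in the data buffer, the read path sketched below follows a read-through pattern: the request is forwarded to the NAS server, the result is stored in the buffer, and a reply is returned to the NAS client in a second DMAFS packet. The helper functions are placeholders for the behavior described in the surrounding text.

```c
#include <stddef.h>

struct read_result {
    const void *data;  /* target data                                             */
    size_t      len;
    const char *path;  /* directory and/or file to which the target data belongs  */
};

/* Placeholder stubs for the behaviour described in the surrounding text. */
static int  buffer_lookup(const char *p, struct read_result *r) { (void)p; (void)r; return 0; }
static int  server_read(const char *p, struct read_result *r)   { (void)p; (void)r; return 0; }
static void buffer_store(const struct read_result *r)           { (void)r; }
static void reply_to_client(const struct read_result *r)        { (void)r; }

/* Read-request handling: answer from the data buffer on a hit; otherwise
 * fetch from the NAS server, store the result, then answer the NAS client. */
int handle_read_request(const char *path)
{
    struct read_result r;

    if (buffer_lookup(path, &r)) {   /* target data already in the data buffer */
        reply_to_client(&r);
        return 0;
    }

    if (server_read(path, &r) != 0)  /* operation object and type sent to the server */
        return -1;

    buffer_store(&r);                /* keep the data and its directory/file info */
    reply_to_client(&r);             /* second DMAFS packet to the NAS client     */
    return 0;
}
```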
  • The sending unit 903 is further configured to: when the operation type is a write request, send the operation object and the operation type to the NAS server, and send response information for the write operation to the NAS client.
  • The receiving unit 901 is further configured to receive, from the NAS server, response information for the result of the write request operation on the target data.
  • The receiving unit 901 is further configured to receive, from the NAS server, a network protocol packet that carries the operation result for the target data, where the operation result includes the target data and the directory and/or file to which the target data belongs.
  • the processing unit 902 is further configured to generate a second DMAFS packet according to the preset file system type, where the second DMAFS packet includes the operation result.
  • the sending unit 903 is further configured to send a second DMAFS packet to the NAS client.
  • the processing unit 902 is further configured to update a local directory of the acceleration device according to the directory and/or file information to which the target data in the operation result belongs.
  • Through the acceleration device 900 described above, using the data buffer of the acceleration device to store historical data reduces the access delay of read operations and improves the read processing efficiency of NAS data access.
  • In the prior art, the cache associated with the network file system is small, resulting in a low hit rate and high data access latency; by moving the cache associated with the network file system onto the acceleration device, and by not caching at the NAS client the access requests delivered from the VFS, the delay of NAS data processing is reduced to some extent.
  • FIG. 10 is a schematic diagram of an acceleration device 1000 according to an embodiment of the present invention.
  • the acceleration device 1000 includes a processor 1001, a memory 1002, a user interface 1003, a network interface 1004, and a communication bus 1005.
  • the acceleration device 1000 is connected to the NAS client through the user interface 1003, and is connected to the NAS server through the network interface 1004.
  • the processor 1001, the memory 1002, the user interface 1003, and the network interface 1004 communicate through the communication bus 1005. Communication can be achieved by other means such as wireless transmission.
  • The memory 1002 is configured to store instructions, and the processor 1001 is configured to execute the instructions stored in the memory 1002.
  • The memory 1002 stores program code, and the processor 1001 may call the program code stored in the memory 1002 to perform the following operations: receiving a first DMAFS packet sent by the NAS client, where the first DMAFS packet carries the operation object and the operation type; obtaining the operation object and the operation type from the first DMAFS packet; converting the operation object and the operation type into network file system NFS data and encapsulating the NFS data into a network protocol packet; and sending the network protocol packet to the NAS server.
  • The network protocol includes any one of the following: Transmission Control Protocol/Internet Protocol (TCP/IP), User Datagram Protocol/Internet Protocol (UDP/IP), and remote direct memory access (RDMA).
  • the structure shown in FIG. 10 is merely illustrative and does not limit the structure of the acceleration device.
  • the acceleration device 1000 may also include more or less components than those shown in FIG. 10, or have a different configuration than that shown in FIG.
  • The communication bus 1005 is used for communication between the components in the acceleration device.
  • In addition to a data bus, the communication bus 1005 may include a power bus, a control bus, a status signal bus, and the like.
  • However, for clarity of description, the various buses are all labeled as the communication bus 1005 in the figure.
  • the user interface 1003 is used to plug in an external device.
  • the user interface 1003 includes the PCIe interface or the high speed peripheral interface of FIGS. 2A-2D.
  • The network interface 1004 is used for the acceleration device to communicate with the outside.
  • the network interface 1004 mainly includes a wired interface and a wireless interface, such as a network card, an RS232 module, a radio frequency module, a WIFI module, and the like.
  • The processor 1001 performs various functional applications and data processing by running the software programs and modules stored in the memory 1002 (such as the direct memory access file system 10012 and the network file system 10013); for example, the processor 1001 calls the program instructions in the memory 1002 for encapsulating the operation result on the target data, and encapsulates the operation result on the target data into a packet in the DMAFS format.
  • The processor 1001 may be a CPU, or may be another general-purpose processor, a digital signal processor (DSP), an ARM processor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • Further, the acceleration device 1000 provided by this embodiment of the present invention also includes a DMA controller 1006.
  • The DMA controller 1006 is integrated on the hardware board of the acceleration device, and its access interface interfaces with the direct memory access file system and the network file system running on the processor.
  • Under the control of the processor, the DMA controller 1006 can implement data transfer between the acceleration device and the NAS client over the PCIe bus; that is, the DMA controller can move a DMAFS packet of the NAS client to the acceleration device, or move a DMAFS packet generated by the acceleration device to the NAS client, without involving the processor 1001, which speeds up processing in the computer system and effectively improves data transfer performance.
  • the DMA controller 1006 can also be implemented by the processor 1001.
  • The memory 1002 may be configured to store software programs and modules; for example, in this embodiment of the present invention, the processor 1001 sends to the DMA controller 1006 the program instructions/modules corresponding to the operation result on the target data, and the memory stores the historical data of processed NAS access requests.
  • Memory 1002 can include high speed random access memory, and can also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid state memory.
  • memory 1002 can further include memory remotely located relative to processor 1001 that can be coupled to the acceleration device over a network.
  • the memory 1002 can include read only memory and random access memory and provides instructions and data to the processor 1001. A portion of the memory 1002 may also include a non-volatile random access memory.
  • the acceleration device 1000 corresponds to the acceleration device 900 provided by the embodiment of the present invention, and the acceleration device 1000 is used to implement the method shown in FIG. 2A to FIG. 6B according to the embodiment of the present invention. For the sake of brevity, it will not be repeated here.
  • As a possible embodiment, the present invention provides a NAS data access system, which includes the NAS client and the acceleration device provided by the foregoing embodiments. The acceleration device includes a first interface and a second interface; it is connected to the NAS client through the first interface and to the NAS server through the second interface.
  • The NAS client receives a user access request message and first determines the operation object by using the information about the target data to be accessed carried in the access request message, where the operation object includes the directory and/or file to which the target data belongs; the NAS client then generates a first DMAFS packet according to the preset file system type, where the preset file system type describes the format of the DMAFS packet and the first DMAFS packet includes the operation object and the operation type carried in the access request message.
  • The NAS client then sends the first DMAFS packet to the acceleration device.
  • Further, the acceleration device obtains the operation type and the operation object from the first DMAFS packet, continues the processing from NFS to the network protocol packet as in the prior art, and then sends the network protocol packet carrying the operation type and the operation object to the NAS server.
  • This solves the problem in the prior art of excessive CPU and memory load on the NAS client caused by processing the NAS protocol.
  • In summary, in the embodiments of the present invention, the acceleration device offloads the protocol processing of the NAS client in the prior art, which reduces the processing delay of the NAS client, relieves the NAS client CPU and memory load caused by the heavy protocol processing, and improves the processing efficiency of the entire NAS data access system.
  • Further, the predefined file system is compatible with the processing of the NAS protocol stack in the prior art and can reasonably be applied to it, effectively reducing the CPU and memory load of the NAS client and the processing delay.
  • the disclosed systems, devices, and methods may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • For example, the division of units is merely a logical function division; in actual implementation, there may be other division manners, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the functions may be stored in a computer readable storage medium if implemented in the form of a software functional unit and sold or used as a standalone product.
  • Based on such an understanding, the technical solution of the present invention essentially, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions used to cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention.
  • The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Embodiments of the present invention relate to a NAS data access method, system, and related devices. The method is applied in a NAS system. A NAS client receives an access request message and determines an operation object according to the information about the target data to be accessed carried in the access request message, where the operation object includes the directory and/or file to which the target data belongs; according to a format described by a preset file system type, the NAS client generates a first direct memory access file system (DMAFS) packet, where the preset file system type is used to describe the DMAFS format; and the NAS client sends the first DMAFS packet to an acceleration device, so that the acceleration device converts the operation object and the operation type in the first DMAFS packet into network file system (NFS) data, encapsulates the NFS data into a network transport protocol packet, and sends it to the NAS server. This reduces the CPU and memory load of the NAS client and the processing delay of the NAS system.

Description

一种NAS数据访问的方法、***及相关设备 技术领域
本发明涉及存储领域,尤其涉及一种NAS数据访问的方法、***、加速装置及NAS客户端。
背景技术
网络附加存储(Network Attached Storage,NAS)是一种将分布、独立的数据整合为大型、集中化管理的数据中心,以客户端/服务端(Client/Server,C/S)模式工作的存储网络共享***,由NAS服务端为NAS客户端提供基于网络的文件共享服务,无需应用服务器(Application Server,AS)的干预,允许用户在网络上存取数据,可提供跨平台文件共享功能,以便于对不同主机和应用服务器进行访问的技术,在企业数据中心中得到越来越广泛得应用。
现有技术中,如图1A所示,NAS客户端和NAS服务端之间通过网卡接口相连,并基于网络共享***(Common Internet File System,CIFS)或网络文件***(Network File System,NFS)进行通信,其中,CIFS为微软定义的应用逻辑,主要应用在Windows的操作***的NAS客户端和NAS服务端之间,NFS为Linux和Unix定义的应用逻辑,主要应用在Linux操作***的NAS客户端和NAS服务端之间。如图1B所示,NAS的协议栈包括:当NAS客户端的应用软件需要访问NAS服务端的NAS数据时,在NAS客户端内部先将访问请求消息发送给虚拟文件***(Virtual File System,VFS),VFS再将请求消息转发给NFS,NFS将请求消息经过外部数据表示(External Data Representation,XDR)转换,然后发送给远程过程调用(Remote Procedure Call,RPC)模块;RPC模块选择TCP/IP、UDP/IP或者RDMA(后两种协议在图上没有描述)等网络协议,如PRC选择TCP/IP协议时,则需经过开放网络计算(Open Network Computing,ONC)和传输控制协议(Transfer Control Protocol,TCP)PRC等网络协议处理,再通过底层的硬件设备(如网卡)及驱动(如网卡驱动),将请求发送到NAS的服务端。而NAS服务端经过与之类似且相反的流程接收NAS客户端的请求,并将相应的信息回复给NAS客户端。在媒资行业等NAS客户端硬件资源有限的场景中,上述厚重的协议处理 过程导致NAS客户端的CPU负载高、内存占用多、时延不理想的问题,影响NAS客户端的整体性能和数据访问效率。
发明内容
本发明提供了一种NAS数据访问的方法、***及相关设备,能够解决现有技术中NAS数据访问过程中存在的NAS客户端的CPU负载高、内存占用多、处理时延长的问题,以此提高NAS客户端的整体性能和数据访问效率。
第一方面,提供了一种NAS数据访问的方法,该方法应用在NAS数据访问的***中,该***中包括NAS客户端和加速装置,加速装置包括第一接口和第二接口,加速装置通过第一接口连接NAS客户端,通过第二接口与NAS服务端相连。首先,NAS客户端接收访问请求消息,根据该访问请求消息中携带的待访问的目标数据的信息确定操作对象,即确定待访问的目标数据归属的目录和/或文件;然后,根据预设的文件***类型,生成第一直接内存访问DMAFS报文,并向所述加速装置发送第一DMAFS报文,由加速装置完成NAS协议栈中其他协议处理过程,其中,预设的文件***类型用于描述DMAFS报文的格式,第一DMAFS报文中包括操作对象和访问请求消息中携带的操作类型。例如,DMAFS报文中包括请求号、DMAFS数据,其中,DMAFS数据中包括操作对象、用户请求的参数、用户请求的执行状态和数据,以便于加速装置可以将第一DMAFS报文中的操作对象和操作类型转换为网络文件***NFS数据以及将NFS数据封装为网络传输协议报文发送给NAS服务端。由此对NAS客户端的协议处理过程进行卸载,降低NAS客户端的CPU和内存负载,及访问请求的处理时延,提升整个NAS数据访问***的处理效率。
在一种可能的实现方式中,第一接口为高速***组件互连PCIe接口或高速外设接口,第二接口为网卡接口。其中,高速外设接口可以为闪电接口。
在一种可能的实现方式中,NAS客户端接收加速装置发送的携带针对第一DMAFS报文的操作结果的第二DMAFS报文,该操作结果中包括第一DMAFS报文中待访问的目标数据和目标数据所归属的目录和/或文件。
在一种可能的实现方式中,NAS客户端在接收访问请求消息之前, 需要执行初始化流程:NAS客户端先向加速装置发送用于向加速装置请求存储NAS数据的目录的第三DMAFS报文;然后接收加速装置发送的挂载目录信息,并将挂载目录信息中携带的存储NAS数据的目录挂载到本地目录中。
在一种可能的实现方式中,NAS客户端根据操作结果中的目标数据所归属的目录和/或文件信息更新NAS客户端的本地目录。
通过上述方法的描述,NAS客户端在接收到用户访问请求消息后,将该访问请求消息转换为预设文件***格式的DMAFS报文,并发送给加速装置,由加速装置完成现有技术中NAS协议栈的协议处理过程,由此降低NAS客户端的CPU和内存负载,提升整个NAS数据访问***的处理效率。
第二方面,提供了一种NAS数据访问的方法,该方法应用在NAS数据访问的***中,该***中包括NAS客户端和加速装置,加速装置包括第一接口和第二接口,加速装置通过第一接口连接NAS客户端,通过第二接口与NAS服务端相连。首先,加速装置接收到NAS客户端的第一DMAFS报文,获取该报文中携带的对待访问的目标数据的操作对象和所述操作类型;然后,将操作对象和操作类型转换为网络文件***NFS数据,再将所述NFS数据封装为网络传输协议报文,并发送给NAS服务端,由此完成NAS数据的访问过程。
其中,网络传输协议可以为传输控制协议/因特网互联协议TCP/IP,或用户数据报协议/因特网互联协议UDP/IP,或远程直接数据存取RDMA。
通过上述方式,NAS客户端和加速装置之间以DMAFS报文进行数据传输,加速装置在接收到DMAFS报文后,进一步完成NAS协议栈的处理,最终将对待访问的目标数据的操作对象和操作类型信息以网络传输协议报文的形式发送给NAS服务端,以此减少NAS客户端的CPU和内存的负载,降低处理时延,提升整体NAS数据访问***的处理效率。
在一种可能的实现方式中,第一接口为高速***组件互连PCIe接口或高速外设接口,第二接口为网卡接口,其中,高速外设接口可以为闪电接口。
在一种可能的实现方式中,当网络协议为TCP/IP时,加速装置先将NFS数据封装为第一外部数据标识XDR报文;然后再将第一XDR报文封装为第一远程过程调用RPC报文;最后将第一RPC报文封装为第一 TCP/IP报文,并将第一TCP/IP报文发送给NAS服务端,以此,完成网络协议为TCP/IP时,加速装置和NAS客户端之间的数据传输。
在一种可能的实现方式中,当网络协议为UDP/IP时,加速装置先将所述NFS数据封装第一外部数据标识XDR报文;再将第一XDR报文封装为第一远程过程调用RPC报文;最后将第一RPC报文封装为第一UDP/IP报文,并将第一UDP/IP报文发送给NAS服务端,以此,完成网络协议为UDP/IP时,加速装置和NAS客户端之间的数据传输。
在一种可能的实现方式中,当网络协议为RDMA时,加速装置先将NFS数据封装为第一外部数据标识XDR报文;再将第一XDR报文封装为第一远程过程调用RPC报文;然后将第一RPC报文封装为第一RDMA报文,并将第一RDMA报文发送给NAS服务端,以此,完成网络协议为RDMA时,加速装置和NAS客户端之间的数据传输。
在一种可能的实现方式中,在加速装置接收第一DMAFS报文之前,加速装置需要完成数据的初始化过程,包括向NAS服务端发送第一请求消息,并接收NAS服务端发送的存储NAS数据的目录的挂载目录信息,然后根据该挂载目录信息将存储NAS数据的目录挂载到本地目录中,其中,第一请求消息用于向所述NAS服务端请求存储NAS数据的目录。
在一种可能的实现方式中,在加速装置完成初始化过程后,加速装置接收NAS客户端发送的第三DMAFS报文,第三DMAFS报文用于NAS客户端向加速装置请求存储NAS数据的目录;加速装置会将挂载目录信息发送给NAS客户端,以便于NAS客户端根据挂载目录信息将存储NAS数据的目录挂载到本地目录中。
在一种可能的实现方式中,NAS服务端接收携带操作对象和操作类型的网络报文后,会对目标数据执行读请求的操作或写请求的操作,并将操作结果发送给加速装置,操作结果中包括目标数据和目标数据所归属的目录和/或文件相应地,加速装置接收NAS服务端发送的携带对目标数据的操作结果的网络协议报文;然后,加速装置再根据预设的文件***类型,生成第二DMAFS报文,其中,预设的文件***类型用于描述DMAFS报文的格式,第二DMAFS报文中包括操作结果;再向所述NAS客户端发送第二DMAFS报文。
在一种可能的实现方式中,加速装置还包括数据缓存区,该数据缓 存区用于作为NFS的缓存区,存储已处理的访问请求消息的历史数据,例如,当用户执行读请求的操作或写请求的操作时,可以将读请求的操作或写请求的操作的目标数据存储在数据缓存区中,当有新的访问请求,且待访问的目标数据存储在数据缓存区中时,加速装置根据操作对象和操作类型对目标数据执行操作,并向NAS客户端发送对所述目标数据的操作结果。
可选地,可以通过预置阈值控制加速装置的数据缓存区中存储的数据的容量,当缓存区中容量达到预置阈值时,加速装置可以按照预置配置删除最先存储的指定容量的历史数据。
在一种可能的实现方式中,当加速装置的数据缓存区中存在目标数据且操作类型为读请求的操作时,加速装置可以获取数据缓存区中的目标数据和目标数据所归属的目录和/或文件;然后,向NAS客户端发送目标数据和目标数据所归属的目录和/或文件,以此提高读请求的操作的处理效率,降低读请求的操作的处理实现。
在一种可能的实现方式中,当加速装置的数据缓存区中存在目标数据且操作类型为写请求的操作时,加速装置先获取目标数据,并对目标数据执行写请求的操作;然后,向NAS服务端发送操作对象和操作类型;接收NAS服务端对目标数据的写请求的操作的响应信息,其中,写请求的操作的响应信息用于指示对所述目标数据写操作是否执行成功;再向NAS客户端发送写请求的操作的响应信息。
在一种可能的实现方式中,当数据缓存区中不存在目标数据时,加速装置先向NAS服务端发送操作对象和操作类型;然后,接收NAS服务端发送的对所述目标数据的操作结果。
在一种可能的实现方式中,当数据缓存区中不存在目标数据且操作类型为读请求,加速装置先向NAS服务端发送操作对象和操作类型;然后,接收NAS服务端发送的对目标数据的读请求的操作结果,其中,读请求的操作结果中包括目标数据和所述目标数据所归属的目录和/或文件;再将操作结果存储在所述数据缓存区,并向所述NAS客户端发送所述操作结果。
在一种可能的实现方式中,在数据缓存区中不存在目标数据且操作类型为写请求时,向NAS服务端发送操作对象和操作类型;接收NAS 服务端发送的对目标数据的写请求的操作结果的响应信息;将目标数据存储在数据缓存区;并向所述NAS客户端发送写操作的响应信息。
在一种可能的实现方式中,加速装置根据操作结果中的目标数据所归属的目录和/或文件信息更新加速装置的本地目录。
综上所述,加速装置接收NAS客户端的访问请求消息后,继续完成现有技术中协议处理过程,完成与NAS服务端的数据传输,由此减少NAS客户端的CPU负载和内存占用,及NAS数据访问的时延。进一步地,通过加速装置的数据缓存区,对已访问的历史数据进行缓存,提高NAS数据处理的效率,降低数据访问时延,提升整个NAS***的处理效率。
第三方面,本发明提供了一种NAS数据访问的***,该***包括NAS客户端和加速装置,所述加速装置包括第一接口与第二接口,所述加速装置通过所述第一接口连接所述NAS客户端,通过所述第二接口与NAS服务端相连,所述NAS客户端用于执行第一方面或第一方面任一种可能实现方式中操作步骤,所述加速装置用于执行第二方面或第二方面任一种可能实现方式中的操作步骤。
第四方面,本发明提供了一种NAS数据访问的NAS客户端,所述NAS客户端包括用于执行第一方面或第一方面任一种可能实现方式中的NAS数据访问方法的各个模块。
第五方面,本发明提供了一种NAS数据访问的加速装置,所述NAS客户端包括用于执行第二方面或第二方面任一种可能实现方式中的故障处理方法的各个模块。
第六方面,本发明提供一种NAS数据访问的NAS客户端,该NAS客户端包括处理器、存储器、通信总线,所述处理器、存储器之间通过通信总线连接并完成相互间的通信,所述存储器中用于存储计算机执行指令,所述NAS客户端运行时,所述处理器执行所述存储器中的计算机执行指令以利用所述NAS客户端中的硬件资源执行第一方面或第一方面任一种可能的实现方式中的方法。
第七方面,提供了一种计算机可读介质,用于存储计算机程序,该计算机程序包括用于执行第一方面或第一方面的任意可能的实现方式中的方法的指令。
第八方面,本发明提供一种NAS数据访问的加速装置,该加速装置包括 处理器、存储器、用户接口、网络接口、通信总线,加速装置通过用户接口连接NAS客户端,通过网络接口与NAS服务端相连,处理器、存储器、用户接口和网络接口之间通过通信总线连接并完成相互间的通信,所述存储器中用于存储计算机执行指令,所述加速装置运行时,所述处理器执行所述存储器中的计算机执行指令以利用所述加速装置中的硬件资源执行第二方面或第二方面任一种可能的实现方式中的方法。
第九方面,提供了一种计算机可读介质,用于存储计算机程序,该计算机程序包括用于执行第二方面或第二方面的任意可能的实现方式中的方法的指令。
基于上述技术方案,本发明实施例的NAS数据访问的方法、***和相关设备,通过加速装置完成现有技术中NAS客户端对NAS协议栈的协议处理过程,减少NAS客户端的CPU和内存的负载。加速装置和客户端之间采用PCIe或高速外设接口通信,以DMAFS报文进行数据传输,降低了处理时延。进一步的,利用加速装置的数据缓存区对历史数据进行缓存,在数据访问过程中可以提升读数据处理的效率,由此能够提高NAS客户端的整体性能和数据访问效率。
本申请在上述各方面提供的实现方式的基础上,还可以进行进一步组合以提供更多实现方式。
附图说明
为了更清楚地说明本发明实施例的技术方案,下面将对本发明实施例中所需要使用的附图作简单地介绍,显而易见地,下面所描述的附图仅仅是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1A为本发明提供的一种现有技术中NAS***架构的示意图;
图1B为本发明提供的一种现有技术中NAS***协议栈的示意图;
图2A为本发明提供的一种NAS***的硬件结构示意图;
图2B为本发明提供的另一种NAS***的硬件结构示意图;
图2C为本发明提供的另一种NAS***的硬件结构示意图;
图2D为本发明提供的另一种NAS***的硬件结构示意图;
图3为本发明提供的一种NAS数据访问方法的的初始化操作流程的示意 图;
图3A为本发明提供的一种DMAFS报文格式的示意图;
图4为本发明提供的另一种NAS数据访问方法的流程的示意图;
图5为本发明提供的一种NAS***协议栈的示意图;
图6A为本发明提供的一种NAS数据访问装置的示意图;
图6B为本发明提供的另一种NAS数据访问装置的示意图;
图7为本发明提供的一种NAS客户端的示意图;
图8为本发明提供的另一种NAS客户端的示意图;
图9为本发明提供的一种加速装置的示意图;
图10为本发明提供的另一种加速装置的示意图。
具体实施方式
下面结合附图进一步介绍本发明所提供的一种NAS数据访问方法。
图2A为本发明实施例提供的一种NAS数据访问***的逻辑框图,如图所示,该***中包括NAS客户端、加速装置,加速装置通过高速***组件互连(peripheral component interconnect express,PCIe)接口与NAS客户端相连,通过网卡与NAS服务端相连。其中,加速装置与NAS客户端分别属于PCIe拓扑结构中的一个端点设备,二者通过PCIe总线进行通信。
可选地,图2B为本发明提供的另一种NAS数据访问***的逻辑框图,如图所示,与图2A的区别在于,加速装置也可以不配置网卡,加速装置通过PCIe与NAS客户端相连后,利用NAS客户端的网卡与NAS服务端进行数据传输。
可选地,在图2B所示的NAS数据访问***逻辑框图中,也可以在NAS客户端的网卡中添加中央处理器(Central Process Unit,CPU),由该网卡作为加速装置,分别通过PCIe接口与NAS客户端相连,通过NAS客户端的网卡接口与NAS服务端相连。
图2C为本发明实施例提供的另一种NAS数据访问***的逻辑框图,如图所示,当NAS客户端为对机身有严格限制的客户端机器时,例如NAS客户端为MAC Pro,通用的PCIe接口卡较难***NAS客户端机内,此时,加速装置可以通过高速外设接口与NAS客户端连接,其中,高速外设接口如闪电(ThunderBolt)接口。加速装置在通过其自身的网卡与NAS服务端相通信。 如图2C所示,若高速外设接口为闪电接口,需要通过闪电接口芯片控制闪电接口之间的连接。
可选地,如图2D所示,加速装置中可以不配置网卡,在通过高速外设接口与NAS客户端相连后,通过NAS客户端的网卡与NAS服务端相通信。
值得说明的是,图2A至图2D中加速装置与NAS服务端连接的网络可以使用如图所示的以太网,也可以使用其他网络类型,如无损以太网数据中心桥接(Data Center Bridging,DCB),不构成对本发明的限制。
本发明通过添加如图2A至图2D所示的加速装置,利用加速装置完成NAS数据访问过程中的协议处理,由此简化NAS客户端的协议处理过程,进而减少NAS客户端的CPU和内存的损耗,接下来,结合图3进一步介绍本发明所提供的一种NAS数据访问方法,如图所示,在NAS客户端访问NAS数据之前,NAS客户端、加速装置和NAS服务端需要执行初始化操作,所述方法的初始化操作步骤包括:
S301、加速装置向NAS服务端发送第一请求消息。
具体地,第一请求消息用于向NAS服务端请求将存储NAS数据的目录发送给加速装置。
可选地,加速装置可以按照预置配置请求NAS服务端发送指定层级的目录信息,例如预置配置要求初始化时,仅请求NAS服务端发送存储NAS数据的一级根目录信息;或预置配置要求初始化时,请求NAS服务端将所有存储NAS数据的目录信息均发送给加速装置。
其中,加速装置与NAS服务端之间仍使用现有技术中协议处理过程,若加速装置与NAS服务端使用的网络协议为传输控制协议/因特网互联协议(Transmission Control Protocol/Internet Protocol,TCP/IP),即如图1B所示协议栈进行数据传输,则加速装置首先将待传输数据转换为NFS数据,例如将用户的访问请求消息转换为NFS可识别的相关参数,将待操作的文件名进行本地解析;然后再将该NFS数据封装为XDR报文,包括将请求参数封装到报文的特定位置;再将该XDR报文封装为RPC报文,封装过程包括在XDR报文中添加RPC序列号、校验码等信息;最后将该RPC报文封装为TCP/IP报文,并通过网卡传输给NAS服务端,以此兼容现有技术中NAS数据访问的协议处理过程。相应地,NAS服务端按照相反的顺序依次解析加速装置发送的TCP/IP报文,对报文中请 求消息进行处理,并将处理结果发送给加速装置。
可选地,当加速装置与NAS服务端采用用户数据报协议/因特网互联协议(User Datagram Protocol/Intenet Protocol,UDP/IP)进行数据传输时,加速装置首先将待传输数据转换为NFS数据;然后再将该NFS数据封装为XDR报文;再将该XDR报文封装为RPC报文;最后将该RPC报文封装为UDP/IP报文,并通过网卡传输给NAS服务端,相应地,NAS服务端会按照相反的顺序依次解析加速装置发送的UDP/IP报文,对该报文中请求消息进行处理,并将处理结果发送给加速装置。
可选地,当加速装置与NAS服务端采用远程直接数据访问(Remote Direct Memory Access,RDMA)进行数据传输时,加速装置首先将待传输数据转换为NFS数据;然后再将该NFS数据封装为XDR报文;再将该XDR报文封装为RPC报文;最后将该RPC报文封装为RDMA报文,并通过网卡传输给NAS服务端,相应地,NAS服务端会按照相反的顺序依次解析加速装置发送的RDMA报文,对该报文中请求消息进行处理,并将处理结果发送给加速装置。
值得说明的是,NAS服务端与加速装置之间使用不同网络协议时,各层协议报文的封装和解析过程为现有技术,在此不再赘述。
S302、加速装置接收NAS服务端发送的挂载目录信息。
具体地,NAS服务端和加速装置之间使用不同网络协议进行数据传输,加速装置接收NAS服务端发送的携带挂载目录信息的网络协议报文后,解析该报文并获取挂载目录信息,挂载目录信息包括NAS服务端中存储NAS数据的目录。
S303、加速装置将挂载目录信息中的目录挂载在加速装置的本地目录中。
具体地,加速装置根据挂载目录信息在内存中生成本地目录的数据结构,并调用指针函数将挂载目录信息中的存储NAS数据的目录挂载在加速装置的本地目录中。
S304、NAS客户端向加速装置发送第二请求消息。
具体地,第二请求消息用于NAS客户端向加速装置请求将NAS服务端中存储NAS数据的目录发送给NAS客户端。
可选地,NAS客户端可以按照预置配置请求加速装置发送指定层级 的存储NAS数据的目录信息,例如,预置配置要求初始化时,仅请求加速装置发送存储NAS数据的一级根目录信息;或预置配置要求初始化时,请求加速装置将所有存储NAS数据的目录信息均发送给NAS客户端。
值得说明的是,NAS客户端和加速装置可以在存储NAS数据的目录信息中选择相同级别目录挂载到本地目录中,也可以按照预置配置根据用户的操作权限等条件在存储NAS数据的目录信息中选择不同级别目录挂载到本地目录中。
进一步地,NAS客户端和加速装置之间通过DMA控制器进行数据传输,即NAS客户端会将待发送的数据转换为预设的文件***类型描述的格式,生成直接内存访问文件***(Direct Memory Access File System,DMAFS)报文,并通知DMA控制器向加速装置发送该报文,其中,DMA控制器可以由加速装置实现,当NAS客户端需要将NAS客户端生成的DMAFS报文发送给加速装置时,NAS客户端的处理器通过指令(如PCIe指令)通知DMA控制器向加速装置发送NAS客户端生成的DMAFS报文;当加速装置需要将加速装置生成的DMAFS报文发送给NAS客户端时,加速装置处理器通知DMA控制器向NAS客户端发送加速装置生成的DMAFS报文。由此将NAS客户端对NAS协议处理过程移到加速装置,降低NAS客户端中CPU和内存的负载。
可选地,DMA控制器的功能也可以由NAS客户端实现,当NAS客户端需要将NAS客户端生成的DMAFS报文发送给加速装置时,NAS客户端的处理器通过指令(如PCIe指令)通知DMA控制器向加速装置发送NAS客户端生成的DMAFS报文;当加速装置需要将加速装置生成的DMAFS报文发送给NAS客户端时,加速装置处理器通知DMA控制器向NAS客户端发送加速装置生成的DMAFS报文。
预设的文件***类型用于描述DMAFS报文的格式,该预设的文件***类型可以通过运行在NAS客户端的处理器上,与虚拟文件***(Virtual File System,VFS)层对接的DMAFS来实现,DMAFS包含有数据请求消息进行相应操作的具体函数,根据函数可以将数据请求消息转化成预设的文件***类型描述的格式,例如,写操作对应的函数,读操作对应的函数,创建目录对应的函数,删除操作对应的函数,文件偏移量函数,具体的函数本发明实施例对此不进行限定,可以参见现有技 术中每个操作对应的具体函数。
示例地,预设的文件***DMAFS中定义一种文件***类型和四种对象结构,其中,对象结构包括超级块对象、索引(inode)对象、目录项对象(dentry)、文件对象,其中文件***类型用于从***层面定义该文件***所使用的各类函数、以及函数之间的引用关系;超级块对象,用于管理当前文件***,包括索引总数、块(block)总数、inode使用分配等;inode主要指示文件或目录的存储空间以及相应文件操作,如文件名更改、链接文件创建、文件权限修改等;文件对象主要指示已经打开的文件和目录的操作,例如文件内容读取,写入等;索引节点对象,用于记录文件***中目录和文件的索引关系;目录项对象主要是用于缓存目录信息以快速访问文件和目录。在数据处理过程中利用文件***定义的函数对请求消息进行处理,输出预定义的文件格式。
可选地,DMAFS格式的报文如图3A所示,DMAFS格式报文中包括请求号和DMAFS数据,请求号用于标识DMAFS***所处理的请求的编号;DMAFS数据包括操作对象、用户请求的参数、用户请求执行状态和数据,其中,用户请求的参数中包括用户请求的类型(例如只读、只写)、读写数据的长度、读写数据的偏移量;用户请求的执行状态用于标识用户请求执行结果(如成功、失败);数据用于表示读请求的操作或写请求的操作所对应的目标数据,例如,当用户请求是对目标数据的读请求操作时,NAS客户端向加速装置发送的DMAFS报文中“数据”字段为空,在加速装置发送给NAS客户端的DMAFS报文中“数据”字段存储目标数据;当用户请求是对目标数据的写请求操作时,NAS客户端发送给加速装置的DMAFS报文中“数据”字段存储目标数据,在加速装置向NAS客户端发送的DMAFS报文中“数据”字段为空。
可选地,在DMAFS报文中还可以包括报文序列号、报文类型、用户验证信息、用户验证信息校验值,其中,报文序列号用于标识每个报文发送顺序;报文类型用于标识该报文为DMAFS报文;用户验证信息用于标识NAS客户端的用户的访问权限的验证信息,用户验证信息校验值则用于对用户验证信息进行校验。
S305、加速装置向NAS客户端发送存储NAS数据的挂载目录信息。
其中,挂载目录信息即步骤S302中NAS服务端发送给加速装置的 挂载目录信息。
值得说明的是,在加速装置的处理器上也同样运行着预设的文件***,用于将待发送的数据转换为预设的文件***类型,该文件***与步骤S304相同,在此不再赘述。
S306、NAS客户端将挂载目录信息中的目录挂载在NAS客户端的本地目录中。
具体地,NAS客户端根据挂载目录信息生成本地目录的数据结构,并依次调用指针函数将挂载目录信息中的存储NAS数据的目录挂载在NAS客户端的本地目录中。
通过上述步骤S301至步骤S306的描述,NAS客户端、加速装置和NAS服务端完成初始化过程,将存储NAS数据的目录信息挂载到本地目录中,便于后续进行NAS数据访问。
进一步地,图4为本发明提供的一种NAS数据访问方法的流程示意图,如图所示,所述方法包括:
S401、NAS客户端接收访问请求消息,根据所述访问请求消息中携带的待访问的目标数据的信息确定操作对象。
具体地,访问请求消息中携带待访问的目标数据的信息和操作类型,NAS客户端根据访问请求消息中携带的待访问的目标数据的信息确定操作对象,操作对象包括目标数据所归属的目录和/或文件。
在NAS***中,NAS客户端接收用户的访问请求消息中的目标数据信息为字符串,而文件***能够识别的是文件和目录的索引信息,因此,在接收到用户的访问请求消息后,NAS客户端会利用预设文件***的函数根据待访问的目标数据的信息确定操作对象,即待访问的目标数据所归属的目录和/或文件。
示例地,若用户的访问请求消息中待访问的目标数据为/Root/Dir_a/File_b,则NAS客户端会利用DMAFS中读取函数依次执行如下指令:先读取Root目录下所包含的目录和文件信息、再读取Root/Dir_a目录中所包含目录和文件信息,最后再确定Root/Dir_a目录中存在File_b文件,此时,NAS客户端将用户的访问请求消息中的目标数据的字符串信息转换为NFS能够识别的文件和目录信息。
值得说明的是,如图1B所示的NAS协议栈中,NAS客户端接收到 用户的访问请求消息后,会经过VFS层的数据转发,其中,VFS的作用是为各类文件***提供了一个统一的操作界面和应用编程接口。是一个可以让读、写请求不用关心底层的存储介质和文件***类型就可以工作的粘合层,即VFS层主要完成访问请求消息的转发,并未对访问请求消息进行处理。
S402、NAS客户端根据预设的文件***类型,生成第一DMAFS报文。
具体地,预设的文件***类型用于描述DMAFS报文的格式,第一DMAFS报文中包括操作对象和操作类型。
值得说明的是,DMAFS报文的格式与步骤S304中操作过程相同,在此不再赘述。
S403、NAS客户端向加速装置发送第一DMAFS报文。
S404、加速装置获取第一DMAFS报文中的操作类型和操作对象。
S405、加速装置将所述操作对象和所述操作类型转换为网络文件***NFS数据,以及将该NFS数据封装为网络协议报文。
具体地,为兼容现有技术中NAS协议栈的处理过程,加速装置与NAS服务端之间仍使用现有技术中协议处理过程,即加速装置在获取到NAS客户端发送的操作类型和操作对象后,会先经过NFS层数据转换过程,如获取操作对象和操作类型的相关参数(如接收数据的地址),将参数信息存储到关联的数据结构;再将NFS数据封装为网络协议报文,其中,网络协议可以为TCP/IP、或UDP/IP、或RDMA。
值得说明的是,加速装置将访问请求消息中的操作对象和操作类型转化为网络协议报文的过程为现有技术,与步骤S301相同,在此不再赘述。
S406、加速装置向NAS服务端发送网络协议报文。
其中,网络协议报文按照网络协议不同,可以为步骤S405中的TCP/IP报文,也可以为UDP/IP报文或RDMA报文。
进一步地,NAS服务端在接收到网络协议报文后,会根据网络协议报文中携带的操作对象和操作类型对目标数据执行读请求的操作或写请求的操作,并将对目标数据的操作结果返回给加速装置,由加速装置再将操作结果返回给NAS客户端。
图5为一种基于TCP/IP网络协议的简化协议栈的示意图,与图1B对比 可知,对于NAS客户端,由于数据访问请求无需经过NFS、XDR、RPC、TCP/IP的处理过程,而是通过DMA方式直接发送给加速装置,由加速装置进一步完成其他部分的协议处理过程,故与现有技术所提供的协议栈相比,本发明实施例通过加速装置卸载现有技术中NAS客户端对协议的处理过程,可以简化NAS客户端的协议处理过程,从而降低了NAS客户端的CPU负荷。
其中,图5中是一种由加速装置实现DMA控制器的示例,那么,DMA服务端部署在加速装置侧,DMA客户端部署在NAS客户端侧。相应的,当DMA控制器由NAS客户端实现时,DMA服务端在NAS客户端侧,DMA客户端在加速装置侧。
可选地,当NAS服务端与加速装置之间数据传输协议使用UDP/IP、RDMA时,与现有技术中协议处理过程相比,通过将NAS客户端对协议处理过程卸载到加速装置,由加速装置完成NFS到UDP/IP,或NFS到RDMA的转换过程,也可以简化NAS客户端的协议处理过程,从而降低了NAS客户端的CPU负荷。
通过上述步骤S401至步骤S406的描述,通过在NAS客户端添加加速装置,由加速装置完成现有协议栈中自NFS层以后的协议处理过程,解决了NAS客户端因厚重的协议处理而导致的CPU负载高、内存占用多、处理时延长的问题,提高了NAS客户端的整体性能和数据访问效率。进一步的,NAS客户端和加速装置之间利用DMA引擎进行数据传输,在DMA传输数据过程中,CPU不参与工作,这样就很大程度上减轻了CPU资源占有率,从而提高了CPU效率,减少了NAS数据访问的时延。
在一个可能的实施例中,加速装置中还可以包括数据缓存区,用于作为NFS的缓存区,解决现有技术中NFS的Cache容量小,命中率低,延时高的问题。接下来,结合图6A和图6B分别介绍操作类型为读请求和写请求时NAS数据访问的处理过程。
图6A为本发明提供的一种操作类型为读请求的NAS数据访问方法的流程示意图,如图所示,所述方法包括:
S601、NAS客户端接收访问请求消息,根据所述访问请求消息中携带的待访问的目标数据的信息确定操作对象。
S602、NAS客户端根据预设的文件***类型,生成第一DMAFS报文。
S603、NAS客户端向加速装置发送第一DMAFS报文。
S604、加速装置获取第一DMAFS报文中的操作类型和操作对象。
步骤S601至S604与步骤S401至S404的处理过程相同,在此不再赘述。
S605、当数据缓存区中存在目标数据时,加速装置获取数据缓存区中的目标数据和目标数据所归属的目录和/或文件。
具体地,加速装置的数据缓存区中可以存储已访问的NAS数据和该数据归属的目录和/或文件的历史数据,当待访问的目标数据为历史数据时,可以通过加速装置的数据缓存区直接获取,由此提高NAS数据的访问效率,缩短数据访问的时延,并执行步骤S609。当数据缓存区中不存在目标数据时,则执行步骤S606至步骤S609。
可选地,可以通过预置阈值控制加速装置的数据缓存区中存储的数据的容量,当缓存区中容量达到预置阈值时,加速装置可以按照预置配置删除最先存储的指定容量的历史数据。
S606(可选)、当数据缓存区中不存在目标数据时,向NAS服务端发送操作类型和操作对象。
具体地,当加速装置的数据缓存区中不存在目标数据时,加速装置利用如步骤S405和步骤S406所示的方法,按照现有技术的协议处理过程,将携带操作类型和操作对象的网络协议报文发送给NAS服务端。
S607(可选)、NAS服务端向加速装置发送对目标数据的操作结果。
具体地,NAS服务端在接收到步骤S606中发送的携带操作对象和操作类型的网络协议报文后,会解析该报文,并根据该报文携带的操作类型和操作对象对目标数据执行操作,并将操作结果封装为网络协议报文发送给加速装置,其中,操作结果中包括目标数据和目标数据所归属的目录和/或文件。
值得说明的是,步骤S606和步骤S607中NAS服务端和加速装置之间传输数据的网络协议报文的封装和解析过程与步骤S301和步骤S302相同,在此不再赘述。
S608(可选)、加速装置将所述操作结果存储在所述数据缓存区。
S609、加速装置根据预设的文件***类型,生成第二DMAFS报文。
其中,第二DMAFS报文中包括操作结果,即第二DMAFS报文中包括目标数据和目标数据所归属的目录和/或文件,第二DMAFS报文的生成过程与步骤S402相同,在此不再赘述。
S610、加速装置向NAS客户端发送第二DMAFS报文。
进一步地,当目标数据未存储在加速装置的数据缓存区时,加速装置会根据操作操作结果更新本地目录信息,更新过程与步骤S303中所述方法相同,在此不再赘述。相应地,NAS客户端也会根据操作结果更新本地目录信息,更新过程与步骤S306中所述方法相同,在此不再赘述。
作为一个可能的实施例,图6B为操作类型为写请求时的NAS数据访问方法的流程示意图,如图所示,所述方法包括:
S611、NAS客户端接收访问请求消息,根据所述访问请求消息中携带的待访问的目标数据的信息确定操作对象。
S612、NAS客户端根据预设的文件***类型,生成第一DMAFS报文。
S613、NAS客户端向加速装置发送第一DMAFS报文。
S614、加速装置获取第一DMAFS报文中的操作类型和操作对象。
值得说明的是,步骤S611至S614与步骤S401至S404的处理过程相同,在此不再赘述。
S615、当数据缓存区中存在目标数据时,加速装置对目标数据执行写请求的操作。
当数据缓存区中存在目标数据时,加速装置根据操作类型对目标数据执行写请求的操作。
S616、加速装置向NAS服务端发送操作类型和操作对象。
具体地,加速装置的数据缓存区中存储的是已访问的历史数据,当操作类型为写操作时,加速装置对数据缓存区中数据进行修改后,还需要将操作类型和操作对象发送给NAS服务端,由NAS服务端对存储的目标数据执行写操作。
S617、加速装置接收NAS服务端发送的对目标数据的写请求操作的响应信息。
具体地,NAS服务端在对目标数据执行写请求的操作后,会向加速装置发送写请求操作的响应信息,该响应信息用于指示对所述目标数据写操作是否执行成功。
值得说明的是,步骤S617中NAS服务端利用网络协议报文向加速装置发送响应信息,具体过程与步骤S301相同,此处不再赘述。
当数据缓存区中不存在目标数据时,则执行步骤S618的操作过程。
S618(可选)、当数据缓存区中不存在目标数据时,加速装置向NAS服务端发送操作类型和操作对象。
S619(可选)、加速装置接收NAS服务端发送对目标数据的写操作的响应信息。
S620(可选)、加速装置将目标数据存储在数据缓存区。
具体地,对于写请求的操作,加速装置在接收到NAS服务端对目标数据的写请求的操作的响应信息后,会将目标数据存储在数据缓存区中,即将目标数据和目标数据所归属的目录和/或文件信息存储在数据缓存区中,以便于处理后续执行读请求的操作时,能够从数据缓存区中快速查找目标数据,提升读数据的性能。
S621、加速装置根据预设的文件***类型,生成第二DMAFS报文。
S622、加速装置向NAS客户端发送第二DMAFS报文。
步骤S621至S622的操作过程与步骤S609至步骤S610相同,在此不再赘述。
进一步地,当目标数据未存储在加速装置的数据缓存区时,加速装置会根据操作操作结果更新本地目录信息,更新过程与步骤S303中所述方法相同,在此不再赘述。相应地,NAS客户端也会根据操作结果更新本地目录信息,更新过程与步骤S306中所述方法相同,在此不再赘述。
综上所述,通过加速装置对NAS客户端协议的卸载过程,解决了现有技术中NAS数据访问过程中NAS客户端CPU负载和内存占用率高,以及处理时延长的问题。进一步地,利用加速装置的数据缓存区存储历史数据,可以减少读处理过程中的访问时延,提高NAS数据访问的读处理效率。相比于现有技术中媒资行业NAS客户端的缓存区受限所导致的网络文件***关联的Cache容量小所导致命中率低,以及数据访问时延高问题。通过加速装置将网络文件***关联的Cache移到加速装置上,从VFS下发的访问请求NAS客户端不做缓存,由此解决现有技术中网络文件***关联的Cache容量小所导致命中率低,以及数据访问时延高问题,提高NAS数据访问的处理效率。
应理解,在本发明的各种实施例中,上述各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应 对本发明实施例的实施过程构成任何限定。
值得说明的是,对于上述方法实施例,为了简单描述,故将其都表述为一系列的动作组合,但是本领域技术人员应该知悉,本发明并不受所描述的动作顺序的限制,其次,本领域技术人员也应该知悉,说明书中所描述的实施例均属于优选实施例,所涉及的动作并不一定是本发明所必须的。
本领域的技术人员根据以上描述的内容,能够想到的其他合理的步骤组合,也属于本发明的保护范围内。其次,本领域技术人员也应该熟悉,说明书中所描述的实施例均属于优选实施例,所涉及的动作并不一定是本发明所必须的。
上文中结合图2A至图6B,详细描述了根据本发明实施例所提供的存储***中业务链路切换的方法,下面将结合图7至图10,描述根据本发明实施例所提供的NAS数据访问的NAS客户端和加速装置。
图7为本发明提供的一种NAS客户端700的示意图,如图所示,NAS客户端700包括接收单元701、处理单元702、发送单元703;
所述接收单元701,用于接收用户的访问请求消息。
所述处理单元702,用于根据接收单元701接收的所述访问请求消息中携带的待访问的目标数据的信息确定操作对象,所述操作对象中包括所述目标数据所归属的目录和/或文件;根据预设的文件***类型描述的格式,生成第一DMAFS报文,所述预设的文件***类型用于描述DMAFS报文的格式,所述第一DMAFS报文中包括所述操作对象和所述访问请求消息中携带的操作类型。
所述发送单元703,用于向所述加速装置发送第一DMAFS报文。
可选地,所述发送单元703,还用于在所述NAS客户端接收访问请求消息之前,向所述加速装置发送第三DMAFS报文,所述第三DMA报文用于向所述加速装置请求存储NAS数据的目录。
所述接收单元701,还用于接收所述加速装置发送的所述挂载目录信息,并将所述挂载目录信息中携带的存储所述NAS数据的目录挂载到本地目录中。
可选地,所述接收单元701,还用于所述加速装置发送第二DMAFS报文,所述第二DMAFS报文中携带针对所述第一DMAFS报文的操作结果, 所述操作结果中包括所述目标数据和所述目标数据所归属的目录和/或文件。
可选地,处理单元702,还用于根据所述操作结果中的所述目标数据所归属的目录和/或文件信息更新所述NAS客户端的本地目录。
应理解,根据本发明实施例的NAS客户端700可对应于执行本发明实施例中描述的方法,并且NAS客户端700中的各个单元的上述和其它操作和/或功能分别为了实现图2A至图6B中的各个方法的相应流程,为了简洁,在此不再赘述。
在本实施例中,通过在NAS客户端添加加速装置,由加速装置完成现有协议栈中自NFS层以后的协议处理过程,解决了NAS客户端因厚重的协议处理而导致的CPU负载高、内存占用多、处理时延长的问题,提高了NAS客户端的整体性能和数据访问效率。
图8为一种NAS客户端800的硬件结构示意图,如图所示,该NAS客户端800包括一个或多个(图中仅示出一个)处理器801、存储器802、以及通信总线805。本领域普通技术人员可以理解,图8所示的结构仅为示意,其并不对NAS客户端800的结构造成限定。例如,NAS客户端800还可包括比图8中所示更多或者更少的组件,或者具有与图8所示不同的配置。
处理器801、存储器802之间通过通信总线805连接并完成相互间的通信,所述存储器802中用于存储计算机执行指令,所述NAS客户端800运行时,所述处理器801执行所述存储器中的计算机执行指令以利用所述NAS客户端800中的硬件资源执行以下操作:
接收访问请求消息,根据所述访问请求消息中携带的待访问的目标数据的信息确定操作对象,所述操作对象中包括所述目标数据所归属的目录和/或文件;
根据预设的文件***类型描述的格式,生成第一直接内存访问文件***DMAFS报文,所述预设的文件***类型用于描述DMAFS报文的格式,所述第一DMAFS报文中包括所述操作对象和所述访问请求消息中携带的操作类型;
向所述加速装置发送第一DMAFS报文,以便所述加速装置将所述第一DMAFS报文中的所述操作对象和所述操作类型转换为网络文件***NFS数据以及将所述NFS数据封装为网络传输协议报文发送给NAS服务 端。
其中,通信总线805用于NAS客户端800中各组成部件之间的通信。
处理器801通过运行存储在存储器802内的软件程序以及模块(如虚拟文件***8011、直接内存访问文件***8012),从而执行各种功能应用以及数据处理,例如,处理器801通过调用存储器802中的对所述操作类型和操作对象进行封装的程序指令,将所述对目标数据的操作结果封装成DMAFS格式的报文。
应理解,在本发明实施例中,该处理器801可以是CPU,该处理器801还可以是其他通用处理器、数字信号处理器(DSP)、ARM处理器、专用集成电路(ASIC)、现成可编程门阵列(FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。
进一步地,本发明实施例提供的NAS客户端800还包括DMA控制器806,该DMA控制器806集成在NAS客户端800的硬件板卡上,DMA控制器806的访问接口与运行在处理器上的直接内存访问文件***、网络文件***对接。所述DMA控制器806在所述处理器的控制下能够通过PCIe总线实现DMA控制器806和加速装置之间的数据传输,也就是说,DMA控制器806能够将NAS客户端生成的DMAFS报文从NAS客户端搬迁到加速装置,或者将加速装置生成的DMAFS报文从加速装置搬迁到NAS客户端,而不用经过处理器801的运算,从而使得计算机***处理速度加快,有效的提升了数据传输的效能。可选地,DMA控制器的功能也可以由NAS客户端800的处理器801实现。
存储器802可用于存储软件程序、模块、以及数据库,如本发明实施例中处理器801向所述DMA控制器806发送操作类型和操作对象对应的程序指令/模块,将NAS客户端生成的DMAFS报文搬迁到加速装置,或将加速装置生成的DMAFS报文搬迁到NAS客户端。存储器802可包括高速随机存储器,还可包括非易失性存储器,如一个或者多个磁性存储装置、闪存、或者其他非易失性固态存储器。在一些实例中,存储器802可进一步包括相对于处理器801远程设置的存储器,这些远程存储器可以通过网络连接至NAS客户端800。该存储器802可以包括只读存储器和随机存取存储器,并向处理器801提供指令和数据。存储器802的一部分还可以包括非易失性随机存取存储器。例如,存储器802还可以存储设备类型的信息。
可选地,NAS客户端800还可以包括用户接口803和网络接口804,其中,用户接口803用于插接外部设备,例如,用户接口803包括图2A至图2D中PCIe接口或高速外设接口,也可以用于连接如触摸屏、鼠标及键盘等设备,以接收用户输入的信息。网络接口804用于NAS客户端800与外部进行互相通信,该网络接口804主要包括有线接口和无线接口,例如网卡、RS232模块、射频模块、WIFI模块等等。
本领域普通技术人员可以理解,图8所示的结构仅为示意,其并不对加速装置的结构造成限定。例如,NAS客户端800还可包括比图8中所示更多或者更少的组件,或者具有与图8所示不同的配置,如,NAS客户端800也可以不包括存储器802,存储器802由NAS客户端外的设备实现。
应理解,根据本发明实施例的NAS客户端800对应于本发明实施例提供的NAS客户端700,该NAS客户端800用于实现图2A至图6B所示方法中NAS客户端的相应流程,为了简洁,在此不再赘述。
综上所述,通过加速装置对NAS客户端协议的卸载过程,解决了现有技术中NAS数据访问过程中NAS客户端CPU负载和内存占用率高,以及处理时延长的问题。进一步地,利用加速装置的数据缓存区存储历史数据,可以减少读处理过程中的访问时延,提高NAS数据访问的读处理效率。相比于现有技术中流媒体行业NAS客户端的缓存区受限所导致的网络文件***关联的Cache容量小所导致命中率低,以及数据访问时延高问题,通过加速装置将网络文件***关联的Cache移到加速装置上,从VFS下发的访问请求NAS客户端不做缓存,由此解决现有技术中网络文件***关联的Cache容量小所导致命中率低,以及数据访问时延高问题,提高NAS数据访问的处理效率。
图9为本发明提供的一种加速装置900的示意图,如图所示,加速装置900中包括接收单元901、处理单元902、发送单元903:
所述接收单元901,用于接收NAS客户端发送的第一DMAFS报文,所述第一DMAFS报文中携带所述操作对象和所述操作类型。
所述处理单元902,用于获取所述第一DMAFS报文中的所述操作对象和所述操作类型;将所述操作对象和所述操作类型转换为网络文件***NFS数据;以及将所述NFS数据封装为网络协议报文。
所述发送单元903,用于向NAS服务端发送所述网络协议报文。
其中,所述网络协议包括以下协议中的任意一种:TCP/IP、UDP/IP、RDMA。
通过上述加速装置900对NAS客户端的协议处理过程进行卸载,减少了NAS客户端的CPU和内存的负载,并且加速装置900的NAS客户端之间以DMA报文进行数据传输,减少了处理时延,提升了整个NAS数据访问过程的效率。
可选地,所述处理单元902,还用于当网络协议为TCP/IP时,将所述NFS数据封装第一外部数据标识XDR报文;将所述第一XDR报文封装为第一远程过程调用RPC报文;以及将所述第一RPC报文封装为第一TCP/IP报文。
可选地,所述处理单元902,还用于当网络协议为UDP/IP时,将所述NFS数据封装第一外部数据标识XDR报文;将所述第一XDR报文封装为第一远程过程调用RPC报文;以及将所述第一RPC报文封装为第一UDP/IP报文。
可选地,所述处理单元902,还用于当网络协议为RDMA时,将所述NFS数据封装第一外部数据标识XDR报文;将所述第一XDR报文封装为第一远程过程调用RPC报文;以及将所述第一RPC报文封装为第一RDMA报文。
可选地,所述发送单元903,还用于在接收单元901接收所述访问请求消息之前,向所述NAS服务端发送第一请求消息,所述第一请求消息用于向所述NAS服务端请求存储NAS数据的目录。
所述接收单元901,还用于接收所述NAS服务端发送的挂载目录信息,所述挂载目录信息中包括所述NAS服务端中存储所述NAS数据的目录的信息。
所述处理单元902,还用于根据所述挂载目录信息,将所述NAS服务端中存储所述NAS数据的目录挂载到本地目录中。
可选地,所述接收单元901,还用于接收第三DMAFS报文,所述第三DMAFS报文用于所述NAS客户端向所述加速装置请求所述存储NAS数据的目录。
所述发送单元903,还用于向所述NAS客户端发送所述挂载目录信息。
可选地,所述处理单元902,还用于当所述数据缓存区中存在所述 目标数据时,根据所述操作对象和所述操作类型对所述目标数据执行操作。
所述发送单元903,还用于向所述NAS客户端发送对所述目标数据的操作结果。
可选地,所述处理单元902,还用于当所述操作类型为读请求时,获取所述数据缓存区中的所述目标数据和所述目标数据所归属的目录和/或文件。
所述发送单元903,还用于向所述NAS客户端发送所述目标数据和所述目标数据所归属的目录和/或文件。
可选地,所述处理单元902,还用于当所述操作类型为写请求时获取所述目标数据,并对所述目标数据执行所述写请求的操作。
所述发送单元903,还用于向所述NAS服务端发送所述操作对象和所述操作类型;向所述NAS客户端发送所述写操作的响应信息。
所述接收单元901,还用于接收NAS服务端对所述目标数据的写操作的响应信息,所述写操作的响应信息用于指示对所述目标数据写操作是否执行成功。
可选地,所述发送单元903,还用于当所述数据缓存区中不存在所述目标数据时,向所述NAS服务端发送所述操作对象和所述操作类型。
所述接收单元901,用于接收所述NAS服务端发送的对所述目标数据的操作结果。
可选地,所述发送单元903,还用于当所述操作类型为读请求,则向所述所述NAS服务端发送所述操作对象和所述操作类型。
所述接收单元901,还用于接收所述NAS服务端发送的对目标数据的读请求的操作结果,所述读请求的操作结果中包括所述目标数据和所述目标数据所归属的目录和/或文件。
所述处理单元902,还用于将所述操作结果存储在所述数据缓存区。
所述发送单元903,还用于向所述NAS客户端发送所述操作结果。
可选地,所述发送单元903,还用于当所述操作类型为写请求时,向所述所述NAS服务端发送所述操作对象和所述操作类型;向所述NAS客户端发送写操作的响应信息。
所述接收单元901,还用于接收接收NAS服务端发送的对所述目标 数据的所述写请求操作结果的响应信息。
可选地,所述接收单元901,还用于接收所述NAS服务端发送的携带对所述目标数据的操作结果的网络协议报文,所述操作结果中包括所述目标数据和所述目标数据所归属的目录和/或文件。
所述处理单元902,还用于根据所述预设的文件***类型,生成第二DMAFS报文,所述第二DMAFS报文中包括所述操作结果。
所述发送单元903,还用于向所述NAS客户端发送第二DMAFS报文。
可选地,所述处理单元902,还用于根据所述操作结果中的所述目标数据所归属的目录和/或文件信息更新所述加速装置的本地目录。
通过上述加速装置900的描述,利用加速装置的数据缓存区存储历史数据,可以减少读处理过程中的访问时延,提高NAS数据访问的读处理效率。另外,相比于现有技术中媒资行业NAS客户端的缓存区受限,导致网络文件***关联的Cache容量小所导致命中率低,以及数据访问时延高问题,通过加速装置将网络文件***关联的Cache移到加速装置上,从VFS下发的访问请求NAS客户端不做缓存,一定程度上减少了NAS数据处理的时延。
图10为本发明实施例提供的一种加速装置1000的示意图,如图所示,所述加速装置1000包括处理器1001、存储器1002、用户接口1003、网络接口1004和通信总线1005。其中,加速装置1000通过所述用户接口1003连接NAS客户端,通过所述网络接口1004与NAS服务端相连,处理器1001、存储器1002、用户接口1003、网络接口1004通过通信总线1005进行通信,也可以通过无线传输等其他手段实现通信。该存储器1002用于存储指令,该处理器1001用于执行该存储器1002存储的指令。该存储器1002存储程序代码,且处理器1001可以调用存储器1002中存储的程序代码执行以下操作:
接收所述NAS客户端发送的第一直接内存访问文件***DMAFS报文,所述第一DMAFS报文中携带所述操作对象和所述操作类型;
获取所述第一DMAFS报文中的所述操作对象和所述操作类型;
将所述操作对象和所述操作类型转换为网络文件***NFS数据;以及将所述NFS数据封装为网络协议报文;
向NAS服务端发送所述网络协议报文。
其中,所述网络协议包括以下协议中的任意一种:传输控制协议/ 因特网互联协议TCP/IP、用户数据报协议/因特网互联协议UDP/IP、远程直接数据存取RDMA。
本领域普通技术人员可以理解,图10所示的结构仅为示意,其并不对加速装置的结构造成限定。例如,加速装置1000还可包括比图10中所示更多或者更少的组件,或者具有与图10所示不同的配置。
通信总线1005用于加速装置中各组成部件之间的通信。该通信总线1005除包括数据总线之外,还可以包括电源总线、控制总线和状态信号总线等。但是为了清楚说明起见,在图中将各种总线都标为通信总线1005。
用户接口1003用于插接外部设备,例如,用户接口1003包括图2A至图2D中PCIe接口或高速外设接口。网络接口304用于NAS客户端与外部进行互相通信,该网络接口1004主要包括有线接口和无线接口,例如网卡、RS232模块、射频模块、WIFI模块等等。
处理器1001通过运行存储在存储器1002内的软件程序以及模块(如直接内存访问文件***10012、网络文件***10013),从而执行各种功能应用以及数据处理,例如,处理器1001通过调用存储器1002中的对所述对目标数据的操作结果进行封装的程序指令,将所述对目标数据的操作结果封装成直接内存访问远程数据DMA格式的报文。
应理解,在本发明实施例中,该处理器1001可以是CPU,该处理器1001还可以是其他通用处理器、数字信号处理器(DSP)、ARM处理器、专用集成电路(ASIC)、现成可编程门阵列(FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。
进一步地,本发明实施例提供的加速装置1000还包括DMA控制器1006,DMA控制器1006集成在加速装置的硬件板卡上,DMA控制器1006的访问接口与运行在处理器上的直接内存访问文件***、网络文件***对接。所述DMA控制器1006在所述处理器的控制下能够通过PCIe总线实现DMA控制器1006和加速装置之间的数据传输,也就是说,DMA控制器能够将NAS客户端的DMAFS报文搬迁到加速装置,或将加速装置生成的DMAFS报文搬迁到NAS客户端,而不用经过处理器1001的运算,从而使得计算机***处理速度加快,有效的提升了数据传输的效能。可选地,DMA控制器1006也可以由处理器1001实现。
存储器1002可用于存储软件程序以及模块,如本发明实施例中处理器1001向所述DMA控制器1006发送对目标数据的操作结果对应的程序指令/模块,及存储已处理的NAS访问请求的历史数据。存储器1002可包括高速随机存储器,还可包括非易失性存储器,如一个或者多个磁性存储装置、闪存、或者其他非易失性固态存储器。在一些实例中,存储器1002可进一步包括相对于处理器1001远程设置的存储器,这些远程存储器可以通过网络连接至加速装置。该存储器1002可以包括只读存储器和随机存取存储器,并向处理器1001提供指令和数据。存储器1002的一部分还可以包括非易失性随机存取存储器。
应理解,根据本发明实施例的加速装置1000对应于本发明实施例提供的加速装置900,该加速装置1000用于实现图2A至图6B中对应执行主体根据本发明实施例的所示方法,为了简洁,在此不再赘述。
作为一个可能实施例,本发明中提供一种NAS数据访问的***,该***中包括上述实施例所提供的NAS客户端、及加速装置,所述加速装置包括第一接口和第二接口,加速装置通过第一接口连接NAS客户端,通过第二接口与NAS服务端相连。NAS客户端接收用户请求消息,先通过访问请求消息中携带的待访问的目标数据的信息确定操作对象,操作对象中包括目标数据所归属的目录和/或文件;再根据预设的文件***类型,生成第一DMAFS报文,其中,预设的文件***类型用于描述DMAFS报文的格式,第一DMAFS报文中包括操作对象和访问请求消息中携带的操作类型;然后,再向加速装置发送所述第一DMAFS报文。进一步地,由加速装置获取第一DMAFS报文中的操作类型和操作对象,继续完成现有技术中从NFS到网络协议报文的处理过程,再将携带操作类型和操作对象的网络协议报文发送给NAS服务端。以此解决现有技术中NAS客户端在处理NAS协议处理过程中所带来的CPU和内存负载过高问题。综上所述,本发明实施例中利用加速装置对现有技术中NAS客户端的协议处理过程进行卸载,减少了NAS客户端的处理时延,同时,也减轻了由于厚重的协议处理过程所导致的NAS客户端CPU和内存负载,提升了整个NAS数据访问***的处理效率,减少了处理时延。进一步的,通过预定义的文件***,兼容现有技术中NAS协议栈的处理过程,能够合理应用到现有技术中,有效降低NAS客户端CPU和内存的负载,及处理时延。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本发明的范围。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的***、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的***、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个***,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本发明各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本发明的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本发明各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本发明的具体实施方式,但本发明的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本发明揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本发明的保护范围之内。因此,本发明的保护范围应所述以权利要求的保护范围为准。

Claims (36)

  1. 一种NAS数据访问的***,其特征在于,所述***中包括NAS客户端和加速装置,所述加速装置包括第一接口与第二接口,所述加速装置通过所述第一接口连接所述NAS客户端,通过所述第二接口与NAS服务端相连;
    所述NAS客户端,用于接收访问请求消息,根据所述访问请求消息中携带的待访问的目标数据的信息确定操作对象,所述操作对象包括所述目标数据所归属的目录和/或文件;根据预设的文件***类型,生成第一直接内存访问文件***DMAFS报文,所述预设的文件***类型用于描述DMAFS报文的格式,所述第一DMAFS报文中包括所述操作对象和所述访问请求消息中携带的操作类型;向所述加速装置发送所述第一DMAFS报文;
    所述加速装置,用于接收所述第一DMAFS报文,并获取所述第一DMAFS报文中的所述操作对象和所述操作类型;将所述操作对象和所述操作类型转换为网络文件***NFS数据;以及将所述NFS数据封装为网络协议报文,并向所述NAS服务端发送所述网络协议报文。
  2. 根据权利要求1所述***,其特征在于,所述第一接口为高速***组件互连PCIe接口或高速外设接口,所述第二接口为网卡接口。
  3. 根据权利要求1至2中任一所述***,其特征在于,
    所述加速装置,还用于在所述NAS客户端接收所述访问请求消息之前,向所述NAS服务端发送第一请求消息,所述第一请求消息用于向所述NAS服务端请求存储NAS数据的目录;接收所述NAS服务端发送的挂载目录信息,所述挂载目录信息中包括所述NAS服务端中所述存储NAS数据的目录的信息;根据所述挂载目录信息,将所述NAS服务端中所述存储NAS数据的目录挂载到本地目录中;
    所述NAS客户端,还用于向所述加速装置发送第三DMAFS报文,所述第三DMAFS报文用于向所述加速装置请求所述存储NAS数据的目录;接收所述加速装置发送的所述挂载目录信息,并将所述挂载目录信息中的所述存储NAS数据的目录挂载到本地目录中。
  4. 根据权利要求1至3中任一所述***,其特征在于,
    所述加速装置,还用于接收所述NAS服务端发送的携带对所述目标数据的操作结果的网络协议报文;根据所述预设的文件***类型,生成 第二DMAFS报文,所述第二DMAFS报文中包括所述操作结果;向所述NAS客户端发送所述第二DMAFS报文。
  5. 根据权要求1至3中任一所述***,其特征在于,所述加速装置还包括数据缓存区;
    所述加速装置,还用于当所述数据缓存区中存在所述目标数据时,根据所述操作对象和所述操作类型对所述目标数据执行操作,并向所述NAS客户端发送对所述目标数据的操作结果。
  6. 根据权利要求5所述***,其特征在于,所述加速装置,用于根据所述操作对象和所述操作类型对所述目标数据执行操作,并向所述NAS客户端发送对所述目标数据的操作结果,包括:
    当所述操作类型为读请求时,获取所述数据缓存区中的所述目标数据和所述目标数据所归属的目录和/或文件;向所述NAS客户端发送所述目标数据和所述目标数据所归属的目录和/或文件。
  7. 根据权利要求5所述***,其特征在于,所述加速装置,用于根据所述操作对象和所述操作类型对所述目标数据执行操作,并向所述NAS客户端发送对所述目标数据的操作结果,包括:
    当所述操作类型为写请求时,获取所述目标数据,并对所述目标数据执行所述写请求的操作;向所述NAS服务端发送所述操作对象和所述操作类型;接收NAS服务端对所述目标数据执行所述写请求的操作的响应信息;向所述NAS客户端发送所述写请求的操作的响应信息。
  8. 根据权利要求1至3中任一所述***,其特征在于,所述加速装置还包括数据缓存区;
    所述加速装置,还用于当所述数据缓存区中不存在所述目标数据时,向所述NAS服务端发送所述操作对象和所述操作类型;接收所述NAS服务端发送的对所述目标数据的操作结果。
  9. 根据权利要求8所述***,其特征在于,所述加速装置,用于向所述NAS服务端发送所述操作对象和所述操作类型,接收所述NAS服务端发送的对所述目标数据的操作结果,包括:
    当所述操作类型为读请求,则向所述NAS服务端发送所述操作对象和所述操作类型;接收所述NAS服务端发送的对所述目标数据的读请求的操作结果,所述操作结果中包括所述目标数据和所述目标数据所归属 的目录和/或文件;将所述操作结果存储在所述数据缓存区,并向所述NAS客户端发送所述操作结果。
  10. 根据权利要求8所述***,其特征在于,所述加速装置,用于向所述NAS服务端发送所述操作对象和所述操作类型,接收所述NAS服务端发送的对所述目标数据的操作结果,包括:
    在所述操作类型为写请求时,向所述所述NAS服务端发送所述操作对象和所述操作类型;接收所述NAS服务端发送的对所述目标数据执行所述写请求的操作的响应信息;还用于将所述目标数据存储在所述数据缓存区,并向所述NAS客户端发送所述写请求的操作的响应信息。
  11. 根据权利要求1至10中任一所述***,其特征在于,
    所述加速装置,还用于根据所述操作结果中的所述目标数据所归属的目录和/或文件信息更新所述加速装置的本地目录。
  12. 根据权利要求1至11中任一所述***,其特征在于,
    所述NAS客户端,还用于根据所述操作结果中的所述目标数据所归属的目录和/或文件信息更新所述NAS客户端的本地目录。
  13. 一种NAS数据访问的方法,其特征在于,所述方法应用于NAS***中,所述NAS***中包括NAS客户端和加速装置,所述加速装置包括第一接口与第二接口,所述加速装置通过所述第一接口连接所述NAS客户端,通过所述第二接口与NAS服务端相连,所述方法包括:
    所述加速装置接收所述NAS客户端发送的第一直接内存访问文件***DMAFS报文,所述第一DMAFS报文中携带待访问的目标数据的操作对象和操作类型;
    所述加速装置获取所述第一DMAFS报文中的所述操作对象和所述操作类型;
    所述加速装置将所述操作对象和所述操作类型转换为网络文件***NFS数据;以及将所述NFS数据封装为网络传输协议报文;
    所述加速装置向所述NAS服务端发送所述网络协议报文。
  14. The method according to claim 13, wherein the first interface is a Peripheral Component Interconnect Express (PCIe) interface or another high-speed peripheral interface, and the second interface is a network interface card interface.
  15. The method according to claim 13 or 14, wherein before the acceleration device receives the first DMAFS packet, the method further comprises:
    sending, by the acceleration device, a first request message to the NAS server, wherein the first request message is used to request, from the NAS server, a directory that stores NAS data;
    receiving, by the acceleration device, mount directory information sent by the NAS server, wherein the mount directory information comprises information about the directory that stores NAS data in the NAS server; and
    mounting, by the acceleration device according to the mount directory information, the directory that stores NAS data in the NAS server to a local directory.
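For illustration only: the mount sequence of claims 15 and 16, in which the acceleration device first obtains the directory that stores NAS data from the NAS server and mounts it locally, and later answers the NAS client's third DMAFS packet with the same mount directory information. The exported path and both stub functions are assumptions of this sketch.

# Illustrative mount sequence for claims 15-16; paths and stubs are assumptions.
mount_table: dict = {}  # local directory -> directory that stores NAS data

def request_nas_data_directory() -> str:
    """Stub for the first request message of claim 15; returns the server's exported directory."""
    return "/exports/nas-data"

def mount_from_server(local_dir: str = "/mnt/nas") -> None:
    exported = request_nas_data_directory()   # claim 15: ask the NAS server
    mount_table[local_dir] = exported         # claim 15: mount it to a local directory

def answer_third_dmafs_packet() -> dict:
    """Claim 16: hand the mount directory information back to the NAS client."""
    return dict(mount_table)

if __name__ == "__main__":
    mount_from_server()
    print(answer_third_dmafs_packet())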
  16. The method according to claim 15, wherein the method further comprises:
    receiving, by the acceleration device, a third DMAFS packet sent by the NAS client, wherein the third DMAFS packet is used to request, from the acceleration device, the directory that stores NAS data; and
    sending, by the acceleration device, the mount directory information to the NAS client.
  17. The method according to any one of claims 13 to 16, wherein the method further comprises:
    receiving, by the acceleration device, a network protocol packet that is sent by the NAS server and that carries an operation result for the target data, wherein the operation result comprises the target data and a directory and/or file to which the target data belongs; and
    generating, by the acceleration device, a second DMAFS packet according to a preset file system type, wherein the second DMAFS packet comprises the operation result and the preset file system type is used to describe a format of a DMAFS packet; and sending the second DMAFS packet to the NAS client.
  18. The method according to any one of claims 13 to 16, wherein the acceleration device further comprises a data cache area, and the method further comprises:
    when the target data exists in the data cache area, performing, by the acceleration device, an operation on the target data according to the operation object and the operation type, and sending an operation result for the target data to the NAS client.
  19. The method according to claim 18, wherein
    the performing, by the acceleration device, an operation on the target data according to the operation object and the operation type, and sending an operation result for the target data to the NAS client comprises:
    when the operation type is a read request, obtaining, from the data cache area, the target data and the directory and/or file to which the target data belongs, and sending the target data and the directory and/or file to which the target data belongs to the NAS client.
  20. The method according to claim 18, wherein the performing, by the acceleration device, an operation on the target data according to the operation object and the operation type, and sending an operation result for the target data to the NAS client comprises:
    when the operation type is a write request, obtaining the target data and performing the write-request operation on the target data; sending the operation object and the operation type to the NAS server; receiving, from the NAS server, response information of performing the write-request operation on the target data; and
    sending the response information of the write-request operation to the NAS client.
  21. The method according to any one of claims 13 to 16, wherein the acceleration device further comprises a data cache area, and the method comprises:
    when the target data does not exist in the data cache area, sending the operation object and the operation type to the NAS server, and receiving an operation result for the target data sent by the NAS server.
  22. The method according to claim 21, wherein the sending the operation object and the operation type to the NAS server and receiving an operation result for the target data sent by the NAS server comprises:
    when the operation type is a read request, sending the operation object and the operation type to the NAS server; receiving an operation result of the read request on the target data sent by the NAS server, wherein the operation result of the read request comprises the target data and the directory and/or file to which the target data belongs; storing the operation result in the data cache area; and sending the operation result to the NAS client.
  23. The method according to claim 21, wherein the sending the operation object and the operation type to the NAS server and receiving an operation result for the target data sent by the NAS server comprises:
    when the operation type is a write request, sending the operation object and the operation type to the NAS server; receiving, from the NAS server, response information of performing the write-request operation on the target data; storing the target data in the data cache area; and sending the response information of the write request to the NAS client.
  24. The method according to any one of claims 17 to 23, wherein the method comprises:
    updating, by the acceleration device, a local directory of the acceleration device according to information about the directory and/or file to which the target data belongs in the operation result.
  25. A NAS data access method, wherein the method is applied to a NAS system, the NAS system comprises a NAS client and an acceleration device, the acceleration device comprises a first interface and a second interface, and the acceleration device is connected to the NAS client through the first interface and to a NAS server through the second interface; and the method comprises:
    receiving, by the NAS client, an access request message, and determining an operation object according to information, carried in the access request message, about target data to be accessed, wherein the operation object comprises a directory and/or file to which the target data belongs;
    generating, by the NAS client, a first direct memory access file system (DMAFS) packet according to a preset file system type, wherein the preset file system type is used to describe a format of a DMAFS packet, and the first DMAFS packet comprises the operation object and an operation type carried in the access request message; and
    sending, by the NAS client, the first DMAFS packet to the acceleration device, so that the acceleration device converts the operation object and the operation type in the first DMAFS packet into network file system (NFS) data, encapsulates the NFS data into a network protocol packet, and sends the packet to the NAS server.
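For illustration only: the NAS client side of claim 25, serializing the first DMAFS packet so it can be handed to the acceleration device over the first interface (for example PCIe). The byte layout, a fixed header followed by the path and payload, is an assumption of this sketch; the patent only requires that the preset file system type describe the packet format.

# Illustrative serialization of the first DMAFS packet for transfer over the
# first interface; the byte layout is an assumption of this sketch.
import struct

OP_CODES = {"read": 1, "write": 2}

def serialize_dmafs(op_type: str, object_path: str, payload: bytes = b"") -> bytes:
    path = object_path.encode()
    header = struct.pack("!HHI", OP_CODES[op_type], len(path), len(payload))
    return header + path + payload

def deserialize_dmafs(buf: bytes):
    op, path_len, payload_len = struct.unpack_from("!HHI", buf, 0)
    path = buf[8:8 + path_len].decode()
    payload = buf[8 + path_len:8 + path_len + payload_len]
    return {1: "read", 2: "write"}[op], path, payload

if __name__ == "__main__":
    wire = serialize_dmafs("write", "/mnt/nas/report.txt", b"hello")
    print(deserialize_dmafs(wire))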
  26. The method according to claim 25, wherein the first interface is a Peripheral Component Interconnect Express (PCIe) interface or another high-speed peripheral interface, and the second interface is a network interface card interface.
  27. The method according to claim 25 or 26, wherein the method further comprises:
    receiving, by the NAS client, a second DMAFS packet sent by the acceleration device, wherein the second DMAFS packet carries an operation result for the first DMAFS packet.
  28. The method according to claim 25 or 26, wherein before the NAS client receives the access request message, the method comprises:
    sending, by the NAS client, a third DMAFS packet to the acceleration device, wherein the third DMAFS packet is used to request, from the acceleration device, a directory that stores NAS data in the NAS server; and
    receiving, by the NAS client, mount directory information sent by the acceleration device, and mounting the directory that stores NAS data carried in the mount directory information to a local directory.
  29. The method according to claim 27 or 28, wherein the method comprises:
    updating, by the NAS client, a local directory of the NAS client according to information about the directory and/or file to which the target data belongs in the operation result.
  30. An acceleration device for NAS data access, wherein the acceleration device comprises a receiving unit, a processing unit, and a sending unit;
    the receiving unit is configured to: receive a first direct memory access file system (DMAFS) packet sent by a NAS client, wherein the first DMAFS packet carries an operation object and an operation type of target data to be accessed; convert the operation object and the operation type into network file system (NFS) data; and encapsulate the NFS data into a network protocol packet;
    the processing unit is configured to obtain the operation object and the operation type from the first DMAFS packet; and
    the sending unit is configured to send the network protocol packet to a NAS server.
  31. The device according to claim 30, wherein
    the receiving unit is further configured to receive a network protocol packet that is sent by the NAS server and that carries an operation result for the target data, wherein the operation result comprises the target data and a directory and/or file to which the target data belongs; and
    the processing unit is further configured to: generate a second DMAFS packet according to a preset file system type, wherein the second DMAFS packet comprises the operation result and the preset file system type is used to describe a format of a DMAFS packet; and send the second DMAFS packet to the NAS client.
  32. The device according to claim 30 or 31, wherein the acceleration device further comprises a data cache area;
    the processing unit is further configured to: when the target data exists in the data cache area, perform an operation on the target data according to the operation object and the operation type; and
    the sending unit is further configured to send an operation result for the target data to the NAS client.
  33. The device according to claim 30 or 31, wherein the acceleration device further comprises a data cache area;
    the sending unit is further configured to: when the target data does not exist in the data cache area, send the operation object and the operation type to the NAS server; and
    the receiving unit is further configured to receive an operation result for the target data sent by the NAS server.
  34. A NAS client for NAS data access, wherein the NAS client comprises a receiving unit, a processing unit, and a sending unit;
    the receiving unit is configured to receive an access request message, and determine an operation object according to information, carried in the access request message, about target data to be accessed, wherein the operation object comprises a directory and/or file to which the target data belongs;
    the processing unit is configured to generate a first direct memory access file system (DMAFS) packet according to a preset file system type, wherein the preset file system type is used to describe a format of a DMAFS packet, and the first DMAFS packet comprises the operation object and an operation type carried in the access request message; and
    the sending unit is configured to send the first DMAFS packet to an acceleration device.
  35. An acceleration device for NAS data access, wherein the acceleration device comprises a processor, a memory, a user interface, a network interface, and a communication bus; the acceleration device is connected to a NAS client through the user interface and to a NAS server through the network interface; the processor, the memory, the user interface, and the network interface are connected to and communicate with one another through the communication bus; the memory is configured to store computer-executable instructions; and when the acceleration device runs, the processor executes the computer-executable instructions in the memory to perform the method according to any one of claims 13 to 24.
  36. A NAS client for NAS data access, wherein the NAS client comprises a processor, a memory, and a communication bus; the processor and the memory are connected to and communicate with each other through the communication bus; the memory is configured to store computer-executable instructions; and when the NAS client runs, the processor executes the computer-executable instructions in the memory to perform the method according to any one of claims 25 to 29.
PCT/CN2016/108238 2015-12-30 2016-12-01 Method, system, and related device for NAS data access WO2017114091A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201680001864.2A CN108028833B (zh) 2015-12-30 2016-12-01 Method, system, and related device for NAS data access
EP16880881.4A EP3288232B1 (en) 2015-12-30 2016-12-01 Nas data access method and system
US16/020,754 US11275530B2 (en) 2015-12-30 2018-06-27 Method, system, and related device for NAS data access

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201511026076.2 2015-12-30
CN201511026076 2015-12-30

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/020,754 Continuation US11275530B2 (en) 2015-12-30 2018-06-27 Method, system, and related device for NAS data access

Publications (1)

Publication Number Publication Date
WO2017114091A1 true WO2017114091A1 (zh) 2017-07-06

Family

ID=59224497

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/108238 WO2017114091A1 (zh) 2015-12-30 2016-12-01 Method, system, and related device for NAS data access

Country Status (4)

Country Link
US (1) US11275530B2 (zh)
EP (1) EP3288232B1 (zh)
CN (1) CN108028833B (zh)
WO (1) WO2017114091A1 (zh)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107690622B9 (zh) * 2016-08-26 2020-09-22 华为技术有限公司 Method, device and system for implementing hardware acceleration processing
CN108885671B (zh) * 2016-11-16 2021-06-22 华为技术有限公司 Directory deletion method and apparatus, and storage server
CN108897632A (zh) * 2018-07-18 2018-11-27 杭州鑫合汇互联网金融服务有限公司 Message system and message sending method
CN109240995B (zh) * 2018-08-22 2022-05-10 郑州云海信息技术有限公司 Method and apparatus for collecting statistics on operation word latency
CN109246238A (zh) * 2018-10-15 2019-01-18 中国联合网络通信集团有限公司 Content cache acceleration method and network device
CN110555009B (zh) * 2019-08-09 2023-01-10 苏州浪潮智能科技有限公司 Method and apparatus for processing a network file system (NFS) service
CN111698239A (zh) * 2020-06-08 2020-09-22 星辰天合(北京)数据科技有限公司 Application control method, apparatus and system based on a network file system
CN112148678B (zh) * 2020-09-18 2023-01-06 苏州浪潮智能科技有限公司 File access method, system, device and medium
CN113395293B (zh) * 2021-07-13 2023-09-15 上海睿赛德电子科技有限公司 RPC-based network socket implementation method
CN113407366A (zh) * 2021-07-15 2021-09-17 北京字节跳动网络技术有限公司 Remote invocation method, apparatus and system
US20230073627A1 (en) * 2021-08-30 2023-03-09 Datadog, Inc. Analytics database and monitoring system for structuring and storing data streams
CN113890896A (zh) * 2021-09-24 2022-01-04 中移(杭州)信息技术有限公司 Network access method, communication device, and computer-readable storage medium
CN114285839B (zh) * 2021-12-23 2023-11-10 北京天融信网络安全技术有限公司 File transmission method and apparatus, computer storage medium, and electronic device
CN115102972A (zh) * 2022-07-15 2022-09-23 济南浪潮数据技术有限公司 Method, apparatus, device and medium for storing NFS files
CN115048227B (zh) * 2022-08-15 2022-12-09 阿里巴巴(中国)有限公司 Data processing method, system, and storage medium
CN116126812B (zh) * 2023-02-27 2024-02-23 开元数智工程咨询集团有限公司 Method and system for storing and integrating engineering-industry documents
CN116450058B (zh) * 2023-06-19 2023-09-19 浪潮电子信息产业股份有限公司 Data dumping method and apparatus, heterogeneous platform, device and medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1473300A * 2000-09-29 2004-02-04 Intelligent network storage interface system and devices
US20100319044A1 (en) * 2009-06-16 2010-12-16 Seachange International, Inc. Efficient Distribution of Remote Storage Data
CN102171670A * 2008-09-30 2011-08-31 惠普开发有限公司 NAS-based multimedia file distribution service
CN105052081A * 2012-12-26 2015-11-11 科缔纳股份有限公司 Communication traffic processing architectures and methods

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8539112B2 (en) 1997-10-14 2013-09-17 Alacritech, Inc. TCP/IP offload device
US7155458B1 (en) * 2002-04-05 2006-12-26 Network Appliance, Inc. Mechanism for distributed atomic creation of client-private files
US7565413B1 (en) * 2002-08-05 2009-07-21 Cisco Technology, Inc. Content request redirection from a wed protocol to a file protocol
WO2004077211A2 (en) * 2003-02-28 2004-09-10 Tilmon Systems Ltd. Method and apparatus for increasing file server performance by offloading data path processing
US7330862B1 (en) * 2003-04-25 2008-02-12 Network Appliance, Inc. Zero copy write datapath
US7272654B1 (en) * 2004-03-04 2007-09-18 Sandbox Networks, Inc. Virtualizing network-attached-storage (NAS) with a compact table that stores lossy hashes of file names and parent handles rather than full names
CN2842562Y (zh) 2005-07-27 2006-11-29 韩泽耀 Network attached storage system chip hardware structure and network system based on the system
US9390019B2 (en) * 2006-02-28 2016-07-12 Violin Memory Inc. Method and apparatus for providing high-performance and highly-scalable storage acceleration
CN101237400A (zh) 2008-01-24 2008-08-06 创新科存储技术(深圳)有限公司 Migration method for a network attached storage service and network attached storage node
US9088592B1 (en) * 2011-11-08 2015-07-21 Alacritech, Inc. Network cache accelerator
US10057387B2 (en) 2012-12-26 2018-08-21 Realtek Singapore Pte Ltd Communication traffic processing architectures and methods
US10642505B1 (en) * 2013-01-28 2020-05-05 Radian Memory Systems, Inc. Techniques for data migration based on per-data metrics and memory degradation
CN103345482A (zh) 2013-06-20 2013-10-09 上海爱数软件有限公司 Network storage system and file access conflict handling method thereof
US9036283B1 (en) * 2014-01-22 2015-05-19 Western Digital Technologies, Inc. Data storage device with selective write to a first storage media or a second storage media

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1473300A * 2000-09-29 2004-02-04 Intelligent network storage interface system and devices
CN102171670A * 2008-09-30 2011-08-31 惠普开发有限公司 NAS-based multimedia file distribution service
US20100319044A1 (en) * 2009-06-16 2010-12-16 Seachange International, Inc. Efficient Distribution of Remote Storage Data
CN105052081A * 2012-12-26 2015-11-11 科缔纳股份有限公司 Communication traffic processing architectures and methods

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3288232A4 *

Also Published As

Publication number Publication date
EP3288232B1 (en) 2020-03-25
US11275530B2 (en) 2022-03-15
CN108028833B (zh) 2020-05-08
EP3288232A1 (en) 2018-02-28
EP3288232A4 (en) 2018-06-13
US20180314433A1 (en) 2018-11-01
CN108028833A (zh) 2018-05-11

Similar Documents

Publication Publication Date Title
WO2017114091A1 (zh) Method, system, and related device for NAS data access
US11775569B2 (en) Object-backed block-based distributed storage
WO2018137217A1 (zh) Data processing system and method, and corresponding apparatus
JP2019507409A (ja) Method and apparatus for accessing a cloud storage service by using a conventional file system interface
EP2656552B1 (en) Third party initiation of communications between remote parties
WO2023155526A1 (zh) Data stream processing method, storage control node, and non-volatile readable storage medium
CN111966446B (zh) RDMA virtualization method in a container environment
EP4318251A1 (en) Data access system and method, and device and network card
WO2023005747A1 (zh) Data transmission method and apparatus, and distributed storage system
WO2021073546A1 (zh) Data access method and apparatus, and first computing device
CN106648838B (zh) Configuration method and apparatus for resource pool management
WO2016101856A1 (zh) Data access method and apparatus
CN116049085A (zh) Data processing system and method
CN115202573A (zh) Data storage system and method
WO2020083067A1 (zh) Resource management method and apparatus
WO2015196899A1 (zh) Method and apparatus for implementing file storage on an IP disk
US20230342087A1 (en) Data Access Method and Related Device
CN117135189A (zh) Server access method and apparatus, storage medium, and electronic device
WO2015055008A1 (zh) Storage control chip and disk packet transmission method
US8429209B2 (en) Method and system for efficiently reading a partitioned directory incident to a serialized process
WO2014077451A1 (ko) Network distributed file system and method using an iSCSI storage system
EP3955524A1 (en) Method for managing remote storage device by means of management device
JP2017184195A (ja) Communication management device, communication management method, and program
US20240069754A1 (en) Computing system and associated method
JP2014235531A (ja) Data transfer device, data transfer system, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16880881

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2016880881

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE