WO2017088664A1 - Data processing method and apparatus for a cluster file system - Google Patents

Data processing method and apparatus for a cluster file system

Info

Publication number
WO2017088664A1
Authority
WO
WIPO (PCT)
Prior art keywords
storage node
request
read
data
file system
Prior art date
Application number
PCT/CN2016/105219
Other languages
English (en)
French (fr)
Inventor
张勤
李璐
Original Assignee
深圳市中博科创信息技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市中博科创信息技术有限公司 filed Critical 深圳市中博科创信息技术有限公司
Publication of WO2017088664A1 publication Critical patent/WO2017088664A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0604Improving or facilitating administration, e.g. storage management
    • G06F3/0605Improving or facilitating administration, e.g. storage management by facilitating the interaction with a user or administrator
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10File systems; File servers
    • G06F16/18File system types
    • G06F16/182Distributed file systems
    • G06F16/1824Distributed file systems implemented using Network-attached Storage [NAS] architecture
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0638Organizing or formatting or addressing of data
    • G06F3/0643Management of files
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/067Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]

Definitions

  • the present invention relates to the field of data processing technologies, and in particular, to a data processing method and apparatus for a cluster file system.
  • cluster file systems have become a new trend in the development of computer technology.
  • a cluster file system combines a plurality of single, independent hosts into one systematic whole and, relying on a storage area network, provides a large-capacity shared file-system storage application such as CIFS, enabling many concurrent user operations and large data transfers within the storage area network.
  • adding cluster nodes, however, also increases the likelihood of node downtime or service failure. For example, when a client is reading or writing a large file to the cluster and the service node handling that read/write request fails, the client's read and write operations will still be interrupted even if the service switches to another node in time, because the other nodes hold no data cache in memory.
  • the main object of the present invention is to provide a data processing method and apparatus for a cluster file system, aiming to solve the technical problem of client read and write operations being interrupted by the failure of a cluster service node.
  • the present invention provides a data processing apparatus for a cluster file system, and the data processing apparatus of the cluster file system includes:
  • a storage module configured to, when the storage node receives a file read or write request and the storage node where the storage module is located is the primary storage node, store the request information in the read or write request into its cache area;
  • a synchronization module configured to synchronize the request information to the slave storage nodes in the cluster file system;
  • a data processing module configured to read data in the storage area according to the read request and the request information in the cache area, or to write data to the storage area according to the write request and the request information in the cache area.
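  • A minimal illustrative sketch of how the three modules above might be organized, assuming Python and invented class and method names:

```python
# Illustrative only: three cooperating modules (storage, synchronization,
# data processing) as plain Python classes; all names are assumptions.
from typing import Dict, List


class StorageModule:
    def __init__(self) -> None:
        self.cache_area: Dict[int, dict] = {}          # request info keyed by process id

    def store_request_info(self, info: dict) -> None:
        self.cache_area[info["pid"]] = info


class SynchronizationModule:
    def __init__(self, slaves: List[StorageModule]) -> None:
        self.slaves = slaves

    def synchronize(self, info: dict) -> None:
        for slave in self.slaves:                      # push request info to every slave
            slave.store_request_info(info)


class DataProcessingModule:
    def __init__(self, storage_area: Dict[str, bytes]) -> None:
        self.storage_area = storage_area

    def read(self, info: dict) -> bytes:
        data = self.storage_area.get(info["path"], b"")
        return data[info["offset"]:]                   # resume from the recorded offset

    def write(self, info: dict, payload: bytes) -> None:
        current = self.storage_area.get(info["path"], b"")
        self.storage_area[info["path"]] = current[:info["offset"]] + payload


if __name__ == "__main__":
    slave = StorageModule()
    sync = SynchronizationModule([slave])
    proc = DataProcessingModule({"/export/a.bin": b"hello world"})
    info = {"pid": 1, "path": "/export/a.bin", "offset": 6, "op": "read"}
    sync.synchronize(info)                             # cache the request info on the slave
    print(proc.read(info))                             # b'world'
```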
  • the present invention further provides a data processing method for a cluster file system, where the data processing method of the cluster file system includes:
  • the storage node determines whether data corresponding to the read or write request requires verification
  • the storage node acquires the verification information input by the user;
  • when the verification information matches the pre-stored verification information, the storage node reads data in its storage area according to the read request and the request information in the cache area, or writes data to its storage area according to the write request and the request information in the cache area;
  • the storage node stores the request information in the read or write request into its buffer area
  • the storage node synchronizes the request information to a slave storage node in the cluster file system
  • if the service request is not a read or write request, the service request is responded to.
  • the present invention further provides a data processing method for a cluster file system, where the data processing method of the cluster file system includes:
  • when a storage node receives a file read or write request and the storage node is the primary storage node, the storage node reads data in its storage area according to the read request and the request information in the cache area, or writes data to its storage area according to the write request and the request information in the cache area;
  • the storage node stores the request information in the read or write request into its buffer area
  • the storage node synchronizes the request information to a slave storage node in the cluster file system.
  • with the data processing method and apparatus of the cluster file system provided by the present invention, when a storage node receives a file read or write request and the storage node is the primary storage node, the storage node reads data in its storage area according to the read request and the request information in the cache area, or writes data to its storage area according to the write request and the request information in the cache area; the storage node stores the request information in the read or write request into its cache area and synchronizes that request information to the slave storage nodes in the cluster file system. After the primary storage node fails, a slave storage node can take over the client's read or write process without interruption according to the service request information in its cache area, ensuring the stability of the cluster file system service.
  • FIG. 1 is a schematic diagram of functional modules of a first embodiment of a data processing apparatus of a cluster file system according to the present invention
  • FIG. 2 is a schematic diagram of functional modules of the data processing apparatus of the cluster file system according to the present invention when processing services other than reading and writing;
  • FIG. 3 is a schematic diagram of functional modules of a second embodiment of a data processing apparatus of a cluster file system according to the present invention.
  • FIG. 4 is a schematic diagram of a refinement function module of the detection module of FIG. 3;
  • FIG. 5 is a schematic diagram of functional modules of a third embodiment of a data processing apparatus of a cluster file system according to the present invention.
  • FIG. 6 is a schematic flowchart of a first embodiment of a data processing method of a cluster file system according to the present invention.
  • FIG. 7 is a schematic flowchart of the data processing method of the cluster file system according to the present invention when processing services other than data reading and writing;
  • FIG. 8 is a schematic flowchart of a second embodiment of a data processing method of a cluster file system according to the present invention.
  • FIG. 9 is a schematic diagram of a refinement process of detecting a link connection state and an operation state of a primary storage node in FIG. 8;
  • FIG. 10 is a schematic flowchart diagram of a third embodiment of a data processing method of a cluster file system according to the present invention.
  • the present invention provides a data processing apparatus for a cluster file system.
  • FIG. 1 is a schematic diagram of functional modules of a first embodiment of a data processing apparatus of a cluster file system according to the present invention.
  • the data processing device of the cluster file system includes:
  • the data processing module 10 is configured to, when the storage node receives a file read or write request, read data in the storage area according to the read request and the request information in the cache area, or write data to the storage area according to the write request and the request information in the cache area;
  • the storage module 20 is configured to store the request information in the read or write request into its cache area;
  • the storage module 20 may obtain the configuration parameters from the control node at first startup, perform an initialization operation, and create the cache area on each storage node. It can be understood that the configuration parameters may be set by the user or entered by the manufacturer during server manufacture, and that they can be modified at any time and forwarded by the control node to all storage nodes.
  • the configuration parameters include: a workgroup name, defining the workgroup name of the cluster file system; a server name, defining the name of each storage node of the cluster file system; a maximum number of connections, defining the largest number of clients allowed to access the cluster file system at the same time; a unified storage path, defining the root directory of the cluster file system, so that the same directory or file can be reached through any node address; a redundant network interface, defining the network interface (that is, the network card) each storage node uses for redundancy; a redundant network address, defining the network address of each storage node used as a redundant network interface (that is, the IP corresponding to that network card), of which several may be set; a redundant host address, defining the single public network address through which clients access the cluster file system; and a data cache size, defining the amount of memory on each storage node used to hold cached data. For example, the user may set the cache area of a storage node to occupy 20 GB, and the storage space occupied by the cache area must not exceed the maximum memory of the storage node server.
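  • A minimal sketch of how such configuration parameters might be represented and the cache area sized at startup, assuming Python and invented field names:

```python
# Illustrative only: one possible in-memory form of the configuration a storage
# node might fetch from the control node at first startup; not the patent's schema.
from dataclasses import dataclass
from typing import List


@dataclass
class ClusterConfig:
    workgroup_name: str                  # workgroup name of the cluster file system
    server_name: str                     # name of this storage node
    max_connections: int                 # maximum number of clients allowed at once
    unified_storage_path: str            # root directory visible through any node
    redundant_interfaces: List[str]      # NICs used for redundancy
    redundant_addresses: List[str]       # IPs bound to the redundant NICs
    redundant_host_address: str          # single public address exposed to clients
    data_cache_bytes: int                # memory reserved for the request-info cache


def cache_area_size(config: ClusterConfig, physical_memory_bytes: int) -> int:
    """The cache area may not exceed the node's maximum available memory."""
    return min(config.data_cache_bytes, physical_memory_bytes)


if __name__ == "__main__":
    cfg = ClusterConfig(
        workgroup_name="cluster-fs",
        server_name="node-1",
        max_connections=128,
        unified_storage_path="/export/cluster",
        redundant_interfaces=["eth1"],
        redundant_addresses=["10.0.0.11"],
        redundant_host_address="10.0.0.100",
        data_cache_bytes=20 * 1024**3,   # e.g. a 20 GB cache area
    )
    print("cache area bytes:", cache_area_size(cfg, physical_memory_bytes=64 * 1024**3))
```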
  • the synchronization module 30 is configured to synchronize the request information to a slave storage node in the cluster file system.
  • when the system starts and initializes, the control node sends a message to the server designated as the primary storage node and makes the primary storage node start the service program; the other storage nodes do not start the service program and can only receive and store the request information of the read or write requests synchronized by the primary storage node. Only the primary storage node, upon receiving a data read or write request, synchronizes that request to all slave storage nodes in the cluster file system; a slave storage node merely stores the received request information of the read or write request into its cache area and performs no other operation.
  • after the primary storage node receives a data read or write request from the client, it synchronizes the request information of that request, such as the process number of the client's data read or write and the I/O offset of that process, to all slave storage nodes.
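  • A minimal sketch of how a primary node might serialize and push this request information (process number and I/O offset) to the slave storage nodes, assuming a JSON wire format and invented function names:

```python
# Sketch only: package the minimal state a slave needs to take over a client
# process and send it to every slave address obtained from the control node.
import json
import socket
from typing import Iterable, Tuple


def request_info(pid: int, path: str, offset: int, op: str) -> bytes:
    """Serialize the client process number, file path, I/O offset and operation."""
    return json.dumps({"pid": pid, "path": path, "offset": offset, "op": op}).encode()


def synchronize_to_slaves(info: bytes, slaves: Iterable[Tuple[str, int]]) -> None:
    """Send the request info to each slave storage node."""
    for host, port in slaves:
        try:
            with socket.create_connection((host, port), timeout=2) as conn:
                conn.sendall(info)
        except OSError:
            # In this sketch an unreachable slave is simply skipped; a real
            # system would need to track and retry the failure.
            continue


if __name__ == "__main__":
    msg = request_info(pid=4711, path="/export/cluster/big.iso", offset=1 << 30, op="write")
    synchronize_to_slaves(msg, slaves=[("10.0.0.12", 9001), ("10.0.0.13", 9001)])
```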
  • referring to FIG. 2, the storage node may determine the type of a service request when it receives one and handle it according to the type of the service request; that is, the data processing apparatus of the cluster file system further includes:
  • the detecting module 40 is configured to detect a type of the service request when receiving a service request, and determine whether the service request is a read or write request;
  • the response module 50 is configured to respond to the service request if the service request is not a read or write request;
  • the data processing module 10 is further configured to: if the service request is a read or write request, read data in the storage area according to the read request and the request information in the buffer area, or according to the The write request and the request information in the buffer area write data to the storage area.
  • after the primary storage node receives a service request, the detecting module 40 detects the type of service the request invokes; only when the request invokes the data read or write service does the synchronization module 30 send the request information to the network interface of the primary storage node, which forwards it to the slave storage nodes corresponding to the node addresses obtained from the control node, and the storage module of each slave storage node stores the received synchronization request into its cache area.
  • it can be understood that the primary storage node does not only receive data read or write requests; it may also receive other types of service requests, such as a configuration parameter modification request. In that case the response module 50 responds to the service request directly, for example by modifying the corresponding parameter, but the storage module 20 does not store the service request into its cache area and the synchronization module 30 does not synchronize it to the slave storage nodes.
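  • A minimal sketch of the dispatch described above, assuming Python: only read or write requests are cached and synchronized, while other service requests are answered directly:

```python
# Illustrative dispatch only; the callables stand in for the storage,
# synchronization and response behaviour and are assumptions for this sketch.
from typing import Callable, Dict


def handle_request(request: dict,
                   store: Callable[[dict], None],
                   synchronize: Callable[[dict], None],
                   respond: Callable[[dict], str]) -> str:
    if request["type"] in ("read", "write"):
        store(request)            # keep the request info in the local cache area
        synchronize(request)      # and push it to every slave storage node
        return f"serving {request['type']} for pid {request['pid']}"
    # Any other service type is answered directly and is neither cached nor synced.
    return respond(request)


if __name__ == "__main__":
    cache: Dict[int, dict] = {}
    print(handle_request({"type": "write", "pid": 7, "offset": 0},
                         store=lambda r: cache.update({r["pid"]: r}),
                         synchronize=lambda r: None,
                         respond=lambda r: "handled directly"))
    print(handle_request({"type": "set_config", "pid": 8},
                         store=lambda r: cache.update({r["pid"]: r}),
                         synchronize=lambda r: None,
                         respond=lambda r: "handled directly"))
```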
  • it can be understood that, in this embodiment, when the primary storage node can no longer provide data read or write services to the client, for example because of a system upgrade or maintenance, the primary storage node selects a slave storage node in a normal working state; the selected slave storage node is switched to become the primary storage node and takes over the client's data read or write process.
  • with the data processing apparatus of the cluster file system proposed in this embodiment, after receiving a data operation request the primary storage node synchronizes the request information to all slave storage nodes, so when a storage node switch is needed, any slave storage node in a normal working state can take over the client's data operation process, providing the client with uninterrupted data operation services and improving the availability of the cluster file system.
  • the data processing apparatus of the cluster file system further includes:
  • the detecting module 40 is further configured to detect a link connection state and an operating state thereof;
  • the detecting module 40 includes:
  • the link detecting unit 41 is configured to periodically send a first detection data packet to the control node and to receive a second response data packet fed back by the control node based on the detection data packet;
  • the read/write detecting unit 42 is configured to determine whether the reading and writing are normal when the response packet is received within a preset time interval;
  • the determining unit 43 is configured to determine normal operation when reading and writing are normal, to determine an operation fault when reading or writing is abnormal, and to determine a link connection fault when the response data packet is not received within the preset time interval.
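  • A minimal sketch of the detection idea, assuming Python and an invented probe format: a response within the window means the link is healthy, and a local read/write check then distinguishes normal operation from an operation fault:

```python
# Sketch only: the probe protocol and the health-check details are assumptions,
# not the patent's mechanism.
import os
import socket
import tempfile


def link_ok(control_addr: tuple, timeout_s: float = 1.0) -> bool:
    """First detection packet: the link is healthy if a response arrives in time."""
    try:
        with socket.create_connection(control_addr, timeout=timeout_s) as conn:
            conn.sendall(b"PROBE")
            return bool(conn.recv(16))          # second response data packet
    except OSError:
        return False


def read_write_ok(storage_dir: str) -> bool:
    """Crude local check: can the node still write and read back a small file?"""
    try:
        with tempfile.NamedTemporaryFile(dir=storage_dir) as f:
            f.write(b"ping")
            f.flush()
            f.seek(0)
            return f.read() == b"ping"
    except OSError:
        return False


def node_status(control_addr: tuple, storage_dir: str) -> str:
    if not link_ok(control_addr):
        return "link-fault"
    return "running" if read_write_ok(storage_dir) else "operation-fault"


if __name__ == "__main__":
    print(node_status(("10.0.0.1", 9000), storage_dir=os.getcwd()))
```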
  • in this embodiment, when the detecting module 40 detects that the primary storage node has failed and can no longer provide data read or write services to the client, the primary storage node selects a slave storage node in a normal working state; the selected slave storage node is switched to become the primary storage node and takes over the client's data read or write process.
  • the switching module 60 is configured to, when a link connection fault or an operation fault is detected, select a new primary storage node from the slave storage nodes in a normal working state and mark the address of the selected slave storage node as the primary storage node address;
  • the update module 70 is configured to send the marked primary node address to the control node and to the selected slave storage node, where the control node replaces the primary node address it has saved with the received primary node address, and the selected slave storage node switches its working state to the primary storage node state upon receiving the primary node address.
  • when a storage node switch is needed, the switching module 60 selects a slave storage node in a normal working state and marks the address of the selected slave node as the address of the new primary storage node, and the update module 70 sends that address to the control node and the selected slave storage node. The control node then forwards subsequent client requests to the new primary storage node, the selected slave storage node switches to the primary storage node working state, and the new primary storage node can read the data read or write process information in its cache area and directly take over the corresponding process, so the client's data operation process is not interrupted.
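  • A minimal sketch of the switch described above, assuming Python and invented message shapes: a healthy slave is selected, its address is marked as the new primary address, and the control node and the chosen slave are updated:

```python
# Failover sketch only; the Node structure and the notification step are
# illustrative assumptions, not the patented protocol.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Node:
    address: str
    healthy: bool = True
    role: str = "slave"


def promote(slaves: List[Node]) -> Optional[Node]:
    """Select a slave in a normal working state and mark it as the new primary."""
    candidate = next((n for n in slaves if n.healthy), None)
    if candidate is None:
        return None
    candidate.role = "primary"          # the chosen slave switches its working state
    return candidate


def notify(control_node: dict, new_primary: Node) -> None:
    """The control node replaces its saved primary address with the new one."""
    control_node["primary_address"] = new_primary.address


if __name__ == "__main__":
    control = {"primary_address": "10.0.0.11"}
    pool = [Node("10.0.0.12", healthy=False), Node("10.0.0.13")]
    new_primary = promote(pool)
    if new_primary:
        notify(control, new_primary)
    print(control, [(n.address, n.role) for n in pool])
```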
  • it can be understood that the performance of the slave storage nodes may be roughly the same or may differ considerably. When the primary storage node is switched, if all slave storage nodes perform similarly, the switching module 60 may randomly select a slave storage node in a normal working state as the new primary storage node; if their performance differs markedly, a higher-performing node in a normal working state may be selected as the new primary storage node according to the performance ranking of the slave storage nodes.
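  • A minimal sketch of that selection policy, assuming Python, an invented performance score, and an invented similarity threshold:

```python
# Illustration only: pick randomly when healthy slaves perform about the same,
# otherwise prefer the highest-performing healthy slave.
import random
from typing import List, Optional, Tuple

Slave = Tuple[str, bool, float]   # (address, healthy, performance score)


def choose_new_primary(slaves: List[Slave], similar_within: float = 0.05) -> Optional[str]:
    healthy = [s for s in slaves if s[1]]
    if not healthy:
        return None
    scores = [s[2] for s in healthy]
    if max(scores) - min(scores) <= similar_within * max(scores):
        return random.choice(healthy)[0]            # roughly equal: random pick
    return max(healthy, key=lambda s: s[2])[0]      # otherwise: best performer


if __name__ == "__main__":
    print(choose_new_primary([("10.0.0.12", True, 0.91),
                              ("10.0.0.13", True, 0.52),
                              ("10.0.0.14", False, 0.99)]))
```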
  • the data processing apparatus of the cluster file system in this embodiment periodically detects the link connection state of the primary storage node and the running state of its service program, discovers a fault of the primary storage node in time, and then switches the primary storage node, providing the client with uninterrupted data read and write services and ensuring the stability of the cluster file system's data processing service.
  • the present invention further provides a third embodiment of the data processing apparatus of the cluster file system based on the first or second embodiment.
  • the data processing apparatus of the cluster file system also includes:
  • a determining module 80 configured to determine, when a file read or write request is received, whether data corresponding to the read or write request needs to be verified;
  • when the primary storage node receives a file read or write request, it checks the access level of the requested file. If the file is a shared file, that is, one that may be accessed by all users, the user's identity and permissions need not be verified; if the file is a private file, that is, one that may only be accessed by authorized users, the user's identity and permissions must be verified to determine whether the file may be accessed by the client issuing the request.
  • the obtaining module 90 is configured to obtain verification information input by the user when the data corresponding to the read or write request needs to be verified;
  • when the file the client requests is a private file, that is, when verification is required, the primary storage node must obtain the information entered by the user and verify the user's permissions. At this point the user may already have logged in, in which case the primary storage node only needs to traverse the information forwarded by the control node and extract the data carrying the username and password keywords or handles to obtain the information entered by the user and verify the user's permissions. If the user has not yet logged in, the primary storage node sends a message to the control node, notifying it to send a reminder to the client or to make the client pop up a login interface; after the user enters the verification information, the control node forwards the verification information entered by the user to the primary storage node for verification.
  • the data processing module 10 is further configured to: when the verification information matches the pre-stored verification information, read data in the storage area according to the read request and the request information in the buffer area, or according to the The write request and the request information in the buffer area write data to its storage area.
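  • A minimal sketch of the access-level check and the verification match, assuming Python and an invented credential store:

```python
# Sketch only, not the patented mechanism: a shared file is served without a
# check, a private file only when the supplied credentials match the stored ones.
import hashlib
from typing import Dict, Optional

ACCESS_LEVEL: Dict[str, str] = {"/pub/readme.txt": "shared", "/home/alice/a.bin": "private"}
STORED_HASH: Dict[str, str] = {"alice": hashlib.sha256(b"alice:secret").hexdigest()}


def needs_verification(path: str) -> bool:
    return ACCESS_LEVEL.get(path, "private") == "private"


def verify(user: str, password: str) -> bool:
    supplied = hashlib.sha256(f"{user}:{password}".encode()).hexdigest()
    return STORED_HASH.get(user) == supplied


def may_access(path: str, user: Optional[str] = None, password: Optional[str] = None) -> bool:
    if not needs_verification(path):
        return True
    return user is not None and password is not None and verify(user, password)


if __name__ == "__main__":
    print(may_access("/pub/readme.txt"))                       # shared file, no check
    print(may_access("/home/alice/a.bin", "alice", "secret"))  # private file, verified
```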
  • with the data processing apparatus of the cluster file system proposed in this embodiment, after a data operation request is received the operation permission of the data is determined; if the data requires permission verification, the verification information entered by the user is obtained and the corresponding data operation is performed only when it matches the pre-stored verification information, improving the security of the cluster file system's data.
  • the invention further provides a data processing method for a cluster file system.
  • the data processing method of the cluster file system includes:
  • Step S10 when a storage node receives a file read or write request, the storage node reads data in its storage area according to the read request and the request information in the cache area, or writes data to its storage area according to the write request and the request information in the cache area;
  • Step S20 the storage node stores the request information in the read or write request into its buffer area
  • the storage node may obtain the configuration parameters from the control node at first startup, perform an initialization operation, and create the cache area on the storage node. It can be understood that the configuration parameters may be set by the user or entered by the manufacturer during server manufacture.
  • the configuration parameters are as described in the first embodiment of the data processing apparatus of the cluster file system and are not repeated here.
  • when the system starts and initializes, the control node sends a message to the server designated as the primary storage node and makes the primary storage node start the service program; the other storage nodes do not start the service program and can only receive and store the request information of the read or write requests synchronized by the primary storage node. Only the primary storage node, upon receiving a data read or write request, synchronizes that request to all slave storage nodes in the cluster file system; a slave storage node merely stores the received request information of the read or write request into its cache area and performs no other operation.
  • after the primary storage node receives a data read or write request from the client, it synchronizes the request information of that request, such as the process number of the client's data read or write and the I/O offset of that process, to all slave storage nodes.
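  • A minimal sketch of the slave side, assuming Python and the same illustrative JSON format as above: the slave only receives the synchronized request information and stores it in its cache area:

```python
# Sketch only: a blocking demo server that stores synchronized request info and
# performs no read or write of its own until a switch happens.
import json
import socketserver
from typing import Dict

CACHE_AREA: Dict[int, dict] = {}     # request info keyed by client process number


class SyncHandler(socketserver.StreamRequestHandler):
    def handle(self) -> None:
        payload = self.rfile.read()
        if not payload:
            return
        info = json.loads(payload)
        CACHE_AREA[info["pid"]] = info   # store only; no other operation is performed


if __name__ == "__main__":
    # Listen for synchronization messages from the primary storage node.
    with socketserver.TCPServer(("0.0.0.0", 9001), SyncHandler) as server:
        server.serve_forever()
```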
  • Step S30 the storage node synchronizes the request information to a slave storage node in the cluster file system.
  • after receiving a data read or write request, the primary storage node responds to the request and provides the client with a data read or write service according to the request information.
  • it can be understood that the primary storage node does not only receive data read or write requests; it may also receive other types of service requests, such as a configuration parameter modification request, in which case it responds to the request directly, for example by modifying the corresponding parameter, but it does not store the service request into its cache area and does not synchronize it to the slave storage nodes.
  • referring to FIG. 7, the storage node may determine the type of a service request when it receives one and handle it according to the type of the service request; that is, the data processing method of the cluster file system further includes:
  • Step S40 When the storage node receives the service request, detecting a type of the service request.
  • Step S50 determining whether the service request is a read or write request
  • if the service request is a read or write request, the step of the storage node reading data in its storage area according to the read request and the request information in the cache area, or writing data to its storage area according to the write request and the request information in the cache area, is performed;
  • the data processing method of the cluster file system further includes:
  • Step S60 if the service request is not a read or write request, responding to the service request.
  • after the primary storage node receives a service request, it detects the type of service the request invokes; only when the request invokes the data read or write service is the request information sent to the network interface of the primary storage node, which forwards it to the slave storage nodes corresponding to the node addresses obtained from the control node, and each slave storage node stores the received synchronization request into its cache area.
  • it can be understood that, in this embodiment, when the primary storage node can no longer provide data read or write services to the client, for example because of a system upgrade or maintenance, the primary storage node selects a slave storage node in a normal working state; the selected slave storage node is switched to become the primary storage node and takes over the client's data read or write process.
  • with the data processing method of the cluster file system in this embodiment, after receiving a data operation request the primary storage node synchronizes the request information to all slave storage nodes, so when a storage node switch is needed, any slave storage node in a normal working state can take over the client's data operation process, providing the client with uninterrupted data operation services and improving the availability of the cluster file system.
  • further, a failure may occur while the primary storage node is serving the client, so the primary storage node needs to periodically detect its link connection state and the running state of its service program in order to find a fault in time and handle it accordingly; on this basis, a second embodiment of the data processing method of the cluster file system of the present invention is proposed from the first embodiment.
  • the data processing method of the cluster file system further includes:
  • Step S70 When the storage node is a primary storage node, the storage node detects a link connection state thereof and an operation state of the storage node.
  • the step of detecting, by the storage node, its link connection state and the running state of the storage node includes:
  • Step S71 the storage node periodically sends a first detection data packet to the control node
  • Step S72 the storage node receives a second response data packet fed back by the control node based on the detection data packet;
  • Step S73 when receiving the response data packet within a preset time interval, determining whether the reading and writing of the storage node are normal;
  • Step S74 when the reading and writing of the storage node are normal, determining that the storage node is operating normally;
  • Step S75 when reading or writing of the storage node is abnormal, determining that the storage node has an operation fault;
  • Step S76 When the response data packet is not received within the preset time interval, determine that the link connection of the storage node is faulty.
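  • The decision of steps S73 to S76 restated as a small pure function, purely as an illustration and assuming Python, with booleans standing in for the actual probe and read/write checks:

```python
def classify(response_received: bool, read_write_normal: bool) -> str:
    """Map the two detection outcomes onto the node status used for switching."""
    if not response_received:          # S76: no response packet within the interval
        return "link-connection-fault"
    if read_write_normal:              # S74: reads and writes are normal
        return "running-normally"
    return "operation-fault"           # S75: reads or writes are abnormal


if __name__ == "__main__":
    for received, rw in [(True, True), (True, False), (False, True)]:
        print(received, rw, "->", classify(received, rw))
```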
  • in this embodiment, when the primary storage node detects that it has failed and can no longer provide data read or write services to the client, it selects a slave storage node in a normal working state; the selected slave storage node is switched to become the primary storage node and takes over the client's data read or write process.
  • Step S80 when the storage node detects in real time that its link connection is faulty or that it has an operation fault, the storage node selects a new primary storage node from the slave storage nodes in a normal working state and marks the address of the selected slave storage node as the primary storage node address;
  • Step S90 sending the marked primary node address to the control node and the selected secondary storage node, wherein the control node updates the saved primary node address by using the received primary node address, and selects When the storage node receives the address of the primary node, the working state is switched to the state of the primary storage node.
  • when a storage node switch is needed, a slave storage node in a normal working state is selected, the address of the selected slave node is marked as the address of the new primary storage node, and that address is sent to the control node. The control node then forwards subsequent client requests to the new primary storage node, the selected slave storage node switches to the primary storage node working state, and the new primary storage node can read the data read or write process information in its cache area and directly take over the corresponding process, so the client's data operation process is not interrupted.
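  • A minimal sketch of the takeover, assuming Python and invented cache contents: the new primary resumes the client's transfer from the I/O offset recorded in its cache area:

```python
# Sketch only: the file layout and the cached request info are assumptions.
import io
from typing import Dict


def resume_read(cache_area: Dict[int, dict], pid: int, storage: Dict[str, bytes]) -> bytes:
    """Continue a client's read from the offset the failed primary had reached."""
    info = cache_area[pid]                       # synchronized earlier by the old primary
    stream = io.BytesIO(storage[info["path"]])
    stream.seek(info["offset"])                  # pick up exactly where it left off
    return stream.read()


if __name__ == "__main__":
    cache = {4711: {"pid": 4711, "path": "/export/cluster/big.iso", "offset": 5, "op": "read"}}
    store = {"/export/cluster/big.iso": b"0123456789"}
    print(resume_read(cache, 4711, store))       # b'56789' (only the remaining bytes)
```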
  • it can be understood that the performance of the slave storage nodes may be roughly the same or may differ considerably. When the primary storage node is switched, if all slave storage nodes perform similarly, a slave storage node in a normal working state may be randomly selected as the new primary storage node; if their performance differs markedly, a higher-performing node in a normal working state may be selected as the new primary storage node according to the performance of the slave storage nodes.
  • the data processing method of the cluster file system in this embodiment periodically detects the link connection state of the primary storage node and the running state of its service program, discovers a fault of the primary storage node in time, and then switches the primary storage node, providing the client with uninterrupted data read and write services and ensuring the stability of the cluster file system's data processing service.
  • the present invention further provides a third embodiment of the data processing method of the cluster file system based on the first or second embodiment.
  • referring to FIG. 10, before step S10 the data processing method of the cluster file system further includes the steps of:
  • Step S100 the storage node determines whether data corresponding to the read or write request needs to be verified
  • when the primary storage node receives a file read or write request, it checks the access level of the requested file. If the file is a shared file, that is, one that may be accessed by all users, the user's identity and permissions need not be verified; if the file is a private file, that is, one that may only be accessed by authorized users, the user's identity and permissions must be verified to determine whether the file may be accessed by the client issuing the request.
  • Step S110 when the data corresponding to the read or write request needs to be verified, the storage node acquires the verification information input by the user;
  • when the file the client requests is a private file, that is, when verification is required, the primary storage node must obtain the information entered by the user and verify the user's permissions. At this point the user may already have logged in, in which case the primary storage node only needs to traverse the information forwarded by the control node and extract the data carrying the username and password keywords or handles to obtain the verification information entered by the user and verify the user's permissions. If the user has not yet logged in, the primary storage node sends a message to the control node, notifying it to send a permission verification reminder to the client or to make the client pop up a login interface; after the user enters the verification information, the control node forwards the verification information entered by the user to the primary storage node for verification.
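  • A minimal sketch of the two cases above, assuming Python and invented message fields: credentials are extracted from the forwarded traffic if the user is already logged in, otherwise the control node is asked to prompt the client:

```python
# Sketch only: the message shapes and field names are invented for this illustration.
from typing import Iterable, Optional, Tuple


def extract_credentials(forwarded: Iterable[dict]) -> Optional[Tuple[str, str]]:
    """Traverse forwarded messages and grab the entry carrying username/password."""
    for msg in forwarded:
        if "username" in msg and "password" in msg:
            return msg["username"], msg["password"]
    return None


def obtain_verification_info(forwarded: Iterable[dict], ask_control_node) -> Tuple[str, str]:
    creds = extract_credentials(forwarded)
    if creds is not None:                 # user already logged in
        return creds
    return ask_control_node()             # control node reminds the client or shows a login

if __name__ == "__main__":
    traffic = [{"op": "stat"}, {"username": "alice", "password": "secret"}]
    print(obtain_verification_info(traffic, ask_control_node=lambda: ("bob", "hunter2")))
```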
  • Step S120 determining whether the verification information matches the pre-stored verification information
  • when the verification information matches the pre-stored verification information, the step of the storage node reading data in its storage area according to the read request and the request information in the cache area, or writing data to its storage area according to the write request and the request information in the cache area, that is, step S10, is performed.
  • with the data processing method of the cluster file system proposed in this embodiment, after a data operation request is received the operation permission of the data is determined; if the data requires permission verification, the verification information entered by the user is obtained and the corresponding data operation is performed only when it matches the pre-stored verification information, improving the security of the cluster file system's data.
  • through the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, or of course by hardware, but in many cases the former is the better implementation. Based on this understanding, the part of the technical solution of the present invention that in essence makes a contribution over the prior art can be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, magnetic disk, or optical disc) and including a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a cloud server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)

Abstract

A data processing apparatus for a cluster file system, comprising: a data processing module configured to, when a storage node receives a file read or write request, read data in the storage area according to the read request and the request information in the cache area, or write data to the storage area according to the write request and the request information in the cache area; a storage module configured to store the request information in the read or write request into the cache area; and a synchronization module configured to synchronize the request information to the slave storage nodes in the cluster file system. A data processing method for a cluster file system is also provided. With the apparatus and method, after the primary storage node fails, a slave storage node can take over the client's read or write process without interruption according to the service request information in its cache area, ensuring the stability of the cluster file system service.

Description

集群文件***的数据处理方法和装置
技术领域
本发明涉及数据处理技术领域,尤其涉及一种集群文件***的数据处理方法和装置。
背景技术
近年来,集群文件***已经成为计算机技术发展的新趋势。集群文件***通过将多个单一独立的主机有机的结合串联成一个***性的整体,并依托存储区域网络,对外提供一个大容量文件***的共享存储应用,如CIFS。实现了存储区域网络内的多并发用户操作和大数据传输。
虽然如此,集群节点的增加同时也增加了节点宕机或服务故障的可能性。比如,当客户机在向集群中读写一个较大文件时,如果受理该读写请求的服务节点故障,即使服务能及时切换到其他节点,由于其他节点的内存中并没有数据缓存,那么客户机的读写操作还是会中断。
发明内容
本发明的主要目的在于提供一种集群文件***的数据处理方法和装置,旨在解决由于集群服务节点故障,使客户端读写操作中断的技术问题。
为实现上述目的,本发明提供的一种集群文件***的数据处理装置,所述集群文件***的数据处理装置包括:
存储模块,用于在存储节点接收到文件读取或写入请求,且所述存储模块所在的存储节点为主存储节点时,将所述读取或写入请求中的请求信息存入其缓存区;
同步模块,用于将所述请求信息同步至集群文件***中的从存储节点;
数据处理模块,用于根据所述读取请求以及所述缓存区中的请求信息读取其存储区中的数据,或者根据所述写入请求以及所述缓存区中的请求信息向其存储区中写入数据。
本发明进一步提供一种集群文件***的数据处理方法,所述集群文件***的数据处理方法包括:
在所述存储节点接收服务请求时,检测所述服务请求的类型;
判断所述服务请求是否为读取或写入请求;
若所述服务请求为读取或写入请求,则所述存储节点确定所述读取或写入请求对应的数据是否需要验证;
在所述读取或写入请求对应的数据需要验证时,所述存储节点获取用户输入的验证信息;
在所述验证信息与预存的验证信息匹配时,所述存储节点根据所述读取请求以及所述缓存区中的请求信息读取其存储区中的数据,或者根据所述写入请求以及所述缓存区中的请求信息向其存储区中写入数据;
所述存储节点将所述读取或写入请求中的请求信息存入其缓存区;
所述存储节点将所述请求信息同步至集群文件***中的从存储节点;
若所述服务请求不是读取或写入请求,则响应所述服务请求。
本发明进一步提供一种集群文件***的数据处理方法,所述集群文件***的数据处理方法包括:
在存储节点接收到文件读取或写入请求,且所述存储节点为主存储节点时,所述存储节点根据所述读取请求以及所述缓存区中的请求信息读取其存储区中的数据,或者根据所述写入请求以及所述缓存区中的请求信息向其存储区中写入数据;
所述存储节点将所述读取或写入请求中的请求信息存入其缓存区;
所述存储节点将所述请求信息同步至集群文件***中的从存储节点。
本发明提出的集群文件***的数据处理方法和装置,在存储节点接收到文件读取或写入请求,且所述存储节点为主存储节点时,所述存储节点根据所述读取请求以及所述缓存区中的请求信息读取其存储区中的数据,或者根据所述写入请求以及所述缓存区中的请求信息向其存储区中写入数据,所述存储节点将所述读取或写入请求中的请求信息存入其缓存区,所述存储节点将所述请求信息同步至集群文件***中的从存储节点,在主存储节点故障后,从存储节点可根据其缓存区内的服务请求信息无间断的接管客户端的读取或写入进程,保证集群文件***服务的稳定性。
附图说明
图1为本发明集群文件***的数据处理装置第一实施例的功能模块示意图;
图2为本发明集群文件***的数据处理装置处理读写外其他服务时的功能模块示意图;
图3为本发明集群文件***的数据处理装置第二实施例的功能模块示意图;
图4为图3中检测模块的细化功能模块示意图;
图5为本发明集群文件***的数据处理装置第三实施例的功能模块示意图;
图6为本发明集群文件***的数据处理方法第一实施例的流程示意图;
图7为本发明集群文件***的数据处理方法处理数据读写外其他服务时的流程示意图;
图8为本发明集群文件***的数据处理方法第二实施例的流程示意图;
图9为图8中主存储节点检测链路连接状态和运行状态的细化流程示意图;
图10为本发明集群文件***的数据处理方法第三实施例的流程示意图。
本发明目的实现、功能特点及优点将结合实施例,参照附图做进一步说明。
具体实施方式
应当理解,此处所描述的具体实施例仅仅用以解释本发明,并不用于限定本发明。
本发明提供一种集群文件***的数据处理装置。
参照图1,图1为本发明集群文件***的数据处理装置第一实施例的功能模块示意图。
在本实施例中,所述集群文件***的数据处理装置包括 :
数据处理模块10,用于在存储节点接收到文件读取或写入请求时,根据所述读取请求以及所述缓存区中的请求信息读取存储区中的数据,或者根据所述写入请求以及所述缓存区中的请求信息向存储区中写入数据;
存储模块20,用于将所述读取或写入请求中的请求信息存入其缓存区;
存储模块20可在初次启动时获取控制节点上的配置参数,进行初始化操作,在各个存储节点创建所述缓存区,可以理解的是,所述配置参数可由用户自行设置,也可在服务器制造时由厂家录入,且所述配置参数可在任何时间进行修改,并由控制节点转发至所有存储节点。
在本实施例中,所述配置参数包括:工作组名称,用于定义集群文件***的工作组名称;服务器名称,用于定义集群文件***各个存储节点的名称;最大连接数,用于定义最多允许同时访问集群文件***的客户机数量;统一存储路径,用于定义集群文件***的根目录,即通过任一节点地址都能访问到同一目录地址或文件;冗余网络接口,用于定义集群文件***的各个存储节点用做冗余的网络接口,即网卡;冗余网络地址,用于定义集群文件***的各个存储节点用做冗余网络接口的网络地址(即该网卡对应的IP),可以设置多个;冗余主机地址,用于定义集群文件***面向客户端提供访问的公共网络地址,仅可设置一个;数据缓存区大小,用于定义集群文件***各个存储节点上用于存放缓存数据的内存大小。例如用户可以设置存储节点的缓存区占用20GB的存储空间,所述缓存区占用的存储空间不得超过存储节点服务器的内存最大存储空间。
同步模块30,用于将所述请求信息同步至集群文件***中的从存储节点.
在本实施例中,控制节点在***启动初始化时,会向被设置为主存储节点的服务器发送消息,控制所述主存储节点启动程序,而其他存储节点不会启动服务程序,只能接收并存储主存储节点同步的读取或写入请求的请求信息,只有主存储节点在接收到数据读取或写入请求时,将所述读取或写入请求同步至集群文件***中的所有从存储节点,从存储节点仅仅将接收到的所述读取或写入请求的请求信息存入其缓存区,而不进行其他任何操作。
在主存储节点接收到客户端的数据读取或写入请求后,会将所述请求的请求信息:如客户端数据读取或写入的进程号、所述进程的I/O偏移量,同步至所有从存储节点。
参照图2,存储节点可在接收到服务请求时,确定服务请求的类型,根据服务器请求的类型进行相应的处理,即所述集群文件***的数据处理装置还包括:
检测模块40,用于接收服务请求时,检测所述服务请求的类型,判断所述服务请求是否为读取或写入请求;
响应模块50,用于若所述服务请求不是读取或写入请求,则响应所述服务请求;
所述数据处理模块10,还用于若所述服务请求是读取或写入请求,则根据所述读取请求以及所述缓存区中的请求信息读取存储区中的数据,或者根据所述写入请求以及所述缓存区中的请求信息向存储区中写入数据。
在主存储节点接收到服务请求后,检测模块40会检测所述请求调用的服务类型,在所述请求调用数据读取或写入服务时,同步模块30才将所述请求的请求信息发送至主存储节点网络接口,再由主存储节点的网络接口发送至从控制节点获取到的节点地址对应的从存储节点,并由从存储节点的存储模块10将接收到同步请求存入其缓存区内。
可以理解的是,主存储节点不仅仅会接收到数据读取或写入请求,也有可能会接收到其他类型的服务请求,如配置参数修改请求,此时响应模块50会响应所述服务请求直接响应所述请求,如对所述请求对应的参数进行相应的修改,但该存储模块20不会将服务请求存储至其缓存区内,且同步模块20不会将服务请求同步至所有从存储节点。
可以理解的是,在本实施例中,由于***升级或维护等原因,所述主存储节点可能无法继续为客户端提供数据读取或写入服务时,主存储节点会选择一处于正常工作状态的从存储节点,将被选取的从存储节点切换为主存储节点,接管客户端的数据读取或写入进程。
本实施例提出的集群文件***的数据处理装置,主存储节点在接收到数据操作请求后,将所述请求的信息同步至所有从存储节点,在需要切换存储节点时,任一处于正常工作状态的从存储节点均可接管客户端的数据操作进程,为客户端提供无间断的数据操作服务,提高集群文件***的可用性。
进一步地,在主存储节点为客户端提供服务时,有可能发生故障,则主存储节点需定时检测其链路连接状态和服务程序运行状态,以便及时发现故障,并作出相应处理,则基于第一实施例提出本发明集群文件***的数据处理装置第二实施例,参照图3,所述集群文件***的数据处理装置还包括:
所述检测模块40,还用于检测其链路连接状态和运行状态;
参照图4,所述检测模块40包括:
链路检测单元41,用于定时向所述控制节点发送第一检测数据包;
接收所述控制节点基于所述检测数据包反馈的第二响应数据包:
读写检测单元42,用于在预设时间间隔内接收到所述响应数据包时,判断读取以及写入是否正常;
判定单元43,用于在读取以及写入正常时,判定运行正常,在读取以及写入异常时,判定运行故障,以及在预设时间间隔内未接收到所述响应数据包时,判定链路连接故障。
在本实施例中,在检测模块40检测到所述主存储节点出现故障,无法继续为客户端提供数据读取或写入服务时,主存储节点会选择一处于正常工作状态的从存储节点,将被选取的从存储节点切换为主存储节点,接管客户端的数据读取或写入进程。
切换模块60,用于检测到链路连接故障或运行故障时,在处于正常工作状态的从存储节点中选取主存储节点,将选取的所述从存储节点的地址标记为主存储节点地址;
更新模块70,用于将标记的所述主节点地址发送至控制节点以及选取的所述从存储节点,其中,所述控制节点采用接收到的所述主节点地址更新保存的所述主节点地址,且选取的所述存储节点接收到所述主节点地址时,将工作状态切换为主存储节点状态。
在需要切换存储节点时,切换模块60会选取一处于正常工作状态的从存储节点,将所述被选取的从服务节点的地址标记为新的主存储节点的地址,更新模块70将其地址发送至控制节点,控制节点将会把之后的客户端请求转发至新的主存储节点,所述被选取的从存储节点会切换为主存储节点工作状态,新的主存储节点可以读取其缓存区内的数据读取或写入进程信息,直接接管相应的进程,不会使客户端的数据操作进程中断。
可以理解的是,从存储节点的性能可能大致相同,也可能性能差异较大,所以在主存储节点进行切换时,若所有从存储节点性能相差无几,则切换模块60可以随机选取一处于正常工作状态的从存储节点作为主存储节点,若所有从存储节点的性能差异较大,则可按照从存储节点的性能高低顺序选择一处于正常工作状态的性能较高的节点作为新的主存储节点。
本实施例提出的集群文件***的数据处理装置,定时检测主存储节点的链路连接状态和服务程序运行状态,及时发现主存储节点的故障,进而切换主存储节点,为客户端提供无间断的数据读写服务,确保集群文件***数据处理服务的稳定性。
进一步地,为提高集群文件***的安全性,基于第一或第二实施例本发明还提出集群文件***的数据处理装置的第三实施例,参照图5,所述集群文件***的数据处理装置还包括:
确定模块80,用于在接收到文件读取或写入请求时,确定所述读取或写入请求对应的数据是否需要验证;
在本实施例中,主存储节点接收到文件读取或写入请求时,会检测所述被请求访问的文件的访问级别,若所述文件为共享文件,即可以被所有用户访问时,则不需要对用户的身份权限进行验证,若所述文件为私有文件,即所述文件仅仅可以被有许可权限的用户访问,则需要对用户的身份权限进行验证,确定所述文件是否可被发出访问请求的客户端访问。
获取模块90,用于在所述读取或写入请求对应的数据需要验证时,获取用户输入的验证信息;
在客户端请求访问的文件为私有文件,即需要验证时,主存储节点需要获取用户输入的信息,对用户的权限进行验证。此时,用户可能已进行登录操作,主存储节点只需要对控制节点转发的信息进行遍历,抓取带有用户名和密码关键字或句柄的数据,进而获取用户输入的信息,对用户的权限进行验证;若此时用户尚未登录,主存储节点会向控制节点发送消息,通知控制节点向客户端发送提醒或者控制客户端弹出登录界面,在用户输入验证信息后,由控制节点将所述用户输入的验证信息转发至主存储节点进行验证。
所述数据处理模块10,还用于在所述验证信息与预存的验证信息匹配时,根据所述读取请求以及所述缓存区中的请求信息读取其存储区中的数据,或者根据所述写入请求以及所述缓存区中的请求信息向其存储区中写入数据。
本实施例提出的所述集群文件***的数据处理装置,在接收到数据操作请求后,会获取所述数据的操作权限,若所述数据需要权限验证,则获取用户输入的验证信息,在用户输入的验证信息与预存的验证信息匹配时,才执行相应的数据操作,提高集群文件***数据的安全性。
本发明进一步提供一种集群文件***的数据处理方法。
参照图6,所述集群文件***的数据处理方法包括:
步骤S10,在存储节点接收到文件读取或写入请求时,所述存储节点根据所述读取请求以及所述缓存区中的请求信息读取其存储区中的数据,或者根据所述写入请求以及所述缓存区中的请求信息向其存储区中写入数据;
步骤S20,所述存储节点将所述读取或写入请求中的请求信息存入其缓存区;
存储节点可在初次启动时获取控制节点上的配置参数,进行初始化操作,在存储节点创建所述缓存区,可以理解的是,所述配置参数可由用户自行设置,也可在服务器制造时由厂家录入。
在本实施例中,所述配置参数参照集群文件***的数据处理第一实施例所述,在此不再赘述。
在本实施例中,控制节点在***启动初始化时,会向被设置为主存储节点的服务器发送消息,控制所述主存储节点启动程序,而其他存储节点不会启动服务程序,只能接收并存储主存储节点同步的读取或写入请求的请求信息,只有主存储节点在接收到数据读取或写入请求时,将所述读取或写入请求同步至集群文件***中的所有从存储节点,从存储节点仅仅将接收到的所述读取或写入请求的请求信息存入其缓存区,而不进行其他任何操作。
在主存储节点接收到客户端的数据读取或写入请求后,会将所述请求的请求信息:如客户端数据读取或写入的进程号、所述进程的I/O偏移量,同步至所有从存储节点。
步骤S30,所述存储节点将所述请求信息同步至集群文件***中的从存储节点。
主存储节点在接收到数据读取或写入请求后,会响应所述请求,根据所述请求信息,为客户端提供数据读取或写入服务。
可以理解的是,主存储节点不仅仅会接收到数据读取或写入请求,也有可能会接收到其他类型的服务请求,如配置参数修改请求,此时直接响应所述请求,如对所述请求对应的参数进行相应的修改,但不会将服务请求存储至其缓存区内,且不会将服务请求同步至所有从存储节点。
参照图7,存储节点可在接收到服务请求时,确定服务请求的类型,根据服务器请求的类型进行相应的处理,即所述集群文件***的数据处理方法还包括:
步骤S40,在所述存储节点接收服务请求时,检测所述服务请求的类型;
步骤S50,判断所述服务请求是否为读取或写入请求;
若所述服务请求为读取或写入请求,则执行所述存储节点根据所述读取请求以及所述缓存区中的请求信息读取其存储区中的数据,或者根据所述写入请求以及所述缓存区中的请求信息向其存储区中写入数据的步骤;
所述步骤S50之后,所述集群文件***的数据处理方法还包括:
步骤S60,若所述服务请求不是读取或写入请求,则响应所述服务请求。
在主存储节点接收到服务请求后,会检测所述请求调用的服务类型,在所述请求调用数据读取或写入服务时,才将所述请求的请求信息发送至主存储节点网络接口,再由主存储节点的网络接口发送至从控制节点获取到的节点地址对应的从存储节点,并由从存储节点将接收到同步请求存入其缓存区内。
可以理解的是,在本实施例中,由于***升级或维护等原因,所述主存储节点可能无法继续为客户端提供数据读取或写入服务时,主存储节点会选择一处于正常工作状态的从存储节点,将被选取的从存储节点切换为主存储节点,接管客户端的数据读取或写入进程。
本实施例提出的集群文件***的数据处理方法,主存储节点在接收到数据操作请求后,将所述请求的信息同步至所有从存储节点,在需要切换存储节点时,任一处于正常工作状态的从存储节点均可接管客户端的数据操作进程,为客户端提供无间断的数据操作服务,提高集群文件***的可用性。
进一步地,在主存储节点为客户端提供服务时,有可能发生故障,则主存储节点需定时检测其链路连接状态和服务程序运行状态,以便及时发现故障,并作出相应处理,则基于第一实施例提出本发明集群文件***的数据处理方法第二实施例,在本实施例中,参照图8,所述集群文件***的数据处理方法还包括:
步骤S70,在所述存储节点为主存储节点时,所述存储节点检测其链路连接状态和所述存储节点的运行状态;
参照图9,所述存储节点检测其链路连接状态和所述存储节点的运行状态的步骤包括:
步骤S71,所述存储节点定时向所述控制节点发送第一检测数据包;
步骤S72,接收所述控制节点基于所述检测数据包反馈的第二响应数据包:
步骤S73,在预设时间间隔内接收到所述响应数据包时,判断所述存储节点的读取以及写入是否正常;
步骤S74,在所述存储节点的读取以及写入正常时,判定所述存储节点运行正常;
步骤S75,在所述存储节点的读取以及写入异常时,判定所述存储节点运行故障;
步骤S76,在预设时间间隔内未接收到所述响应数据包时,判定所述存储节点的链路连接故障。
在本实施例中,主存储节点检测到其出现故障,无法继续为客户端提供数据读取或写入服务时,主存储节点会选择一处于正常工作状态的从存储节点,将被选取的从存储节点切换为主存储节点,接管客户端的数据读取或写入进程。
步骤S80,在所述获取存储节点实时检测到其链路连接故障或所述存储节点运行故障时,所述存储节点在处于正常工作状态的从存储节点中选取主存储节点,将选取的所述从存储节点的地址标记为主存储节点地址;
步骤S90,将标记的所述主节点地址发送至控制节点以及选取的所述从存储节点,其中,所述控制节点采用接收到的所述主节点地址更新保存的所述主节点地址,且选取的所述存储节点接收到所述主节点地址时,将工作状态切换为主存储节点状态。
在需要切换存储节点时,选取一处于正常工作状态的从存储节点,将所述被选取的从服务节点的地址标记为新的主存储节点的地址,将其地址发送至控制节点,控制节点将会把之后的客户端请求转发至新的主存储节点,所述被选取的从存储节点会切换为主存储节点工作状态,新的主存储节点可以读取其缓存区内的数据读取或写入进程信息,直接接管相应的进程,不会使客户端的数据操作进程中断。
可以理解的是,从存储节点的性能可能大致相同,也可能性能差异较大,所以在主存储节点进行切换时,若所有从存储节点性能相差无几,则可以随机选取一处于正常工作状态的从存储节点作为主存储节点,若所有从存储节点的性能差异较大,则可按照从存储节点的性能高低顺序选择一处于正常工作状态的性能较高的节点作为新的主存储节点。
本实施例提出的集群文件***的数据处理方法,定时检测主存储节点的链路连接状态和服务程序运行状态,及时发现主存储节点的故障,进而切换主存储节点,为客户端提供无间断的数据读写服务,确保集群文件***数据处理服务的稳定性。
进一步地,为提高集群文件***的安全性性,基于第一或第二实施例本发明还提出集群文件***的数据处理方法的第三实施例,参照图10,所述步骤S10之前,所述集群文件***的数据处理方法还包括步骤:
步骤S100,所述存储节点确定所述读取或写入请求对应的数据是否需要验证;
在本实施例中,主存储节点接收到文件读取或写入请求时,会检测所述被请求访问的文件的访问级别,若所述文件为共享文件,即可以被所有用户访问时,则不需要对用户的身份权限进行验证,若所述文件为私有文件,即所述文件仅仅可以被有许可权限的用户访问,则需要对用户的身份权限进行验证,确定所述文件是否可被发出访问请求的客户端访问。
步骤S110,在所述读取或写入请求对应的数据需要验证时,所述存储节点获取用户输入的验证信息;
在客户端请求访问的文件为私有文件,即需要验证时,主存储节点需要获取用户输入的信息,对用户的权限进行验证。此时,用户可能已进行登录操作,主存储节点只需要对控制节点转发的信息进行遍历,抓取带有用户名和密码关键字或句柄的数据,进而获取用户输入的验证信息,对用户的权限进行验证;若此时用户尚未登录,主存储节点会向控制节点发送消息,通知控制节点向客户端发送权限验证提醒或者控制客户端弹出登录界面,在用户输入验证信息后,由控制节点将所述用户输入的验证信息转发至主存储节点进行验证。
步骤S120,判断所述验证信息与预存的验证信息是否匹配;
在所述验证信息与预存的验证信息匹配时,执行所述存储节点根据所述读取请求以及所述缓存区中的请求信息读取其存储区中的数据,或者根据所述写入请求以及所述缓存区中的请求信息向其存储区中写入数据的步骤,即步骤S10。
本实施例提出的集群文件***的数据处理方法,在接收到数据操作请求后,会获取所述数据的操作权限,若所述数据需要权限验证,则获取用户输入的验证信息,在用户输入的验证信息与预存的验证信息匹配时,才执行相应的数据操作,提高集群文件***数据的安全性。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本发明的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端设备(可以是手机,计算机,云端服务器,空调器,或者网络设备等)执行本发明各个实施例所述的方法。
以上仅为本发明的优选实施例,并非因此限制本发明的专利范围,凡是利用本发明说明书及附图内容所作的等效结构或等效流程变换,或直接或间接运用在其他相关的技术领域,均同理包括在本发明的专利保护范围内。

Claims (20)

  1. 一种集群文件***的数据处理装置,其特征在于,所述集群文件***的数据处理装置包括 :
    数据处理模块,用于在存储节点接收到文件读取或写入请求时,根据所述读取请求以及所述缓存区中的请求信息读取存储区中的数据,或者根据所述写入请求以及所述缓存区中的请求信息向存储区中写入数据
    存储模块,用于将所述读取或写入请求中的请求信息存入缓存区;
    同步模块,用于将所述请求信息同步至集群文件***中的从存储节点。
  2. 如权利要求 1 所述的集群文件***的数据处理装置,其特征在于,所述集群文件***的数据处理装置还包括:
    检测模块,用于接收服务请求时,检测所述服务请求的类型,判断所述服务请求是否为读取或写入请求;
    响应模块,用于若所述服务请求不是读取或写入请求,则响应所述服务请求;
    所述数据处理模块,还用于若所述服务请求是读取或写入请求,则根据所述读取请求以及所述缓存区中的请求信息读取存储区中的数据,或者根据所述写入请求以及所述缓存区中的请求信息向存储区中写入数据。
  3. 如权利要求 1 所述的集群文件***的数据处理装置,其特征在于,所述集群文件***的数据处理装置还包括:
    确定模块,用于在接收到文件读取或写入请求时,确定所述读取或写入请求对应的数据是否需要验证;
    获取模块,用于在所述读取或写入请求对应的数据需要验证时,获取用户输入的验证信息;
    所述数据处理模块,还用于在所述验证信息与预存的验证信息匹配时,根据所述读取请求以及所述缓存区中的请求信息读取存储区中的数据,或者根据所述写入请求以及所述缓存区中的请求信息向存储区中写入数据。
  4. 如权利要求3所述的集群文件***的数据处理装置,其特征在于,所述确定模块还用于,检测所述读取或写入请求对应的文件的访问级别,以确定所述读取或写入请求对应的数据是否需要验证,其中,在所述文件为共享文件时,确定所述读取或写入请求对应的数据不需要验证,在所述文件为私有文件时,确定所述读取或写入请求对应的数据需要验证。
  5. 如权利要求2所述的集群文件***的数据处理装置,其特征在于,所述集群文件***的数据处理装置还包括:
    所述检测模块,还用于检测链路连接状态和运行状态;
    切换模块,用于在检测到链路连接故障或运行故障时,在处于正常工作状态的从存储节点中选取主存储节点,将选取的所述从存储节点的地址标记为主存储节点地址;
    更新模块,用于将标记的所述主存储节点地址发送至控制节点以及选取的所述从存储节点,其中,所述控制节点采用接收到的所述主存储节点地址更新保存的所述主存储节点地址,且选取的所述存储节点接收到所述主存储节点地址时,将工作状态切换为主存储节点状态。
  6. 如权利要求5所述的集群文件***的数据处理装置,其特征在于,所述检测模块包括:
    链路检测单元,用于定时向所述控制节点发送第一检测数据包以及接收所述控制节点基于所述检测数据包反馈的第二响应数据包;
    读写检测单元,用于在预设时间间隔内接收到所述响应数据包时,判断读取以及写入是否正常;
    判定单元,用于在读取以及写入正常时,判定运行正常,在读取以及写入异常时,判定运行故障,以及在预设时间间隔内未接收到所述响应数据包时,判定链路连接故障。
  7. 如权利要求1所述的集群文件***的数据处理装置,其特征在于,所述存储模块还用于在所述存储节点初次启动时,获取控制节点上的配置参数并进行初始化操作,创建缓存区。
  8. 一种集群文件***的数据处理方法,其特征在于,所述集群文件***的数据处理方法包括:
    在所述存储节点接收服务请求时,检测所述服务请求的类型;
    判断所述服务请求是否为读取或写入请求;
    若所述服务请求为读取或写入请求,则所述存储节点确定所述读取或写入请求对应的数据是否需要验证;
    在所述读取或写入请求对应的数据需要验证时,所述存储节点获取用户输入的验证信息;
    在所述验证信息与预存的验证信息匹配时,所述存储节点根据所述读取请求以及所述缓存区中的请求信息读取其存储区中的数据,或者根据所述写入请求以及所述缓存区中的请求信息向其存储区中写入数据;
    所述存储节点将所述读取或写入请求中的请求信息存入其缓存区;
    所述存储节点将所述请求信息同步至集群文件***中的从存储节点;
    若所述服务请求不是读取或写入请求,则响应所述服务请求。
  9. 如权利要求8所述的集群文件***的数据处理方法,其特征在于,所述集群文件***的数据处理方法还包括:
    在所述存储节点为主存储节点时,所述存储节点检测其链路连接状态和所述存储节点的运行状态;
    在所述获取存储节点实时检测到其链路连接故障或所述存储节点运行故障时,所述存储节点在处于正常工作状态的从存储节点中选取主存储节点,将选取的所述从存储节点的地址标记为主存储节点地址;
    将标记的所述主存储节点地址发送至控制节点以及选取的所述从存储节点,其中,所述控制节点采用接收到的所述主存储节点地址更新保存的所述主存储节点地址,且选取的所述存储节点接收到所述主存储节点地址时,将工作状态切换为主存储节点状态。
  10. 如权利要求9所述的集群文件***的数据处理的方法,其特征在于,所述存储节点检测其链路连接状态和所述存储节点的运行状态的步骤包括:
    所述存储节点定时向所述控制节点发送第一检测数据包;
    接收所述控制节点基于所述检测数据包反馈的第二响应数据包:
    在预设时间间隔内接收到所述响应数据包时,判断所述存储节点的读取以及写入是否正常;
    在所述存储节点的读取以及写入正常时,判定所述存储节点运行正常,在所述存储节点的读取以及写入异常时,判定所述存储节点运行故障;
    在预设时间间隔内未接收到所述响应数据包时,判定所述存储节点的链路连接故障。
  11. 如权利要求8所述的集群文件***的数据处理的方法,其特征在于,所述集群文件***的数据处理的方法还包括:
    在所述存储节点初次启动时,所述存储节点获取控制节点上的配置参数并进行初始化操作,创建缓存区。
  12. 如权利要求8所述的集群文件***的数据处理的方法,其特征在于,所述存储节点确定所述读取或写入请求对应的数据是否需要验证的步骤包括:
    检测所述读取或写入请求对应的文件的访问级别,以判断所述读取或写入请求对应的数据是否需要验证,其中,在所述文件为共享文件时,判定所述读取或写入请求对应的数据不需要验证,在所述文件为私有文件时,判定所述读取或写入请求对应的数据需要验证。
  13. 如权利要求8所述的集群文件***的数据处理的方法,其特征在于,所述存储节点获取用户输入的验证信息的步骤包括:
    在用户已经进行登录操作时,所述存储节点对控制节点转发的信息进行遍历,抓取带有用户名和密码关键字或句柄的数据以获取用户输入的验证信息;
    在用户未进行登录操作时,所述存储节点向所述控制节点发送消息,并接收所述控制节点反馈的验证信息,其中,所述控制节点在接收到所述消息时,向客户端发送提醒或者控制客户端弹出登录界面,并将用户输入的所述验证信息发送至所述存储节点。
  14. 一种集群文件***的数据处理方法,其特征在于,所述集群文件***的数据处理方法包括:
    在存储节点接收到文件读取或写入请求时,所述存储节点根据所述读取请求以及所述缓存区中的请求信息读取其存储区中的数据,或者根据所述写入请求以及所述缓存区中的请求信息向其存储区中写入数据;
    所述存储节点将所述读取或写入请求中的请求信息存入其缓存区;
    所述存储节点将所述请求信息同步至集群文件***中的从存储节点。
  15. 如权利要求14所述的集群文件***的数据处理方法,其特征在于,所述存储节点根据所述读取请求以及所述缓存区中的请求信息读取其存储区中的数据,或者根据所述写入请求以及所述缓存区中的请求信息向其存储区中写入数据的步骤之前还包括:
    在所述存储节点接收服务请求时,检测所述服务请求的类型;
    判断所述服务请求是否为读取或写入请求;
    若所述服务请求为读取或写入请求,则执行所述存储节点根据所述读取请求以及所述缓存区中的请求信息读取其存储区中的数据,或者根据所述写入请求以及所述缓存区中的请求信息向其存储区中写入数据的步骤;
    所述判断所述服务请求是否为读取或写入请求的步骤之后,所述集群文件***的数据处理方法还包括:
    若所述服务请求不是读取或写入请求,则响应所述服务请求。
  16. 如权利要求14所述的集群文件***的数据处理的方法,其特征在于,所述存储节点根据所述读取请求以及所述缓存区中的请求信息读取其存储区中的数据,或者根据所述写入请求以及所述缓存区中的请求信息向其存储区中写入数据的步骤之前,所述集群文件***的数据处理方法还包括步骤:
    在存储节点接收到文件读取或写入请求时,所述存储节点确定所述读取或写入请求对应的数据是否需要验证;
    在所述读取或写入请求对应的数据需要验证时,所述存储节点获取用户输入的验证信息;
    在所述验证信息与预存的验证信息匹配时,执行所述存储节点根据所述读取请求以及所述缓存区中的请求信息读取其存储区中的数据,或者根据所述写入请求以及所述缓存区中的请求信息向其存储区中写入数据的步骤。
  17. 如权利要求16所述的集群文件***的数据处理的方法,其特征在于,所述存储节点确定所述读取或写入请求对应的数据是否需要验证的步骤包括:
    检测所述读取或写入请求对应的文件的访问级别,以判断所述读取或写入请求对应的数据是否需要验证,其中,在所述文件为共享文件时,判定所述读取或写入请求对应的数据不需要验证,在所述文件为私有文件时,判定所述读取或写入请求对应的数据需要验证。
  18. 如权利要求14所述的集群文件***的数据处理方法,其特征在于,所述集群文件***的数据处理方法还包括:
    在所述存储节点为主存储节点时,所述存储节点检测其链路连接状态和所述存储节点的运行状态;
    在所述获取存储节点实时检测到其链路连接故障或所述存储节点运行故障时,所述存储节点在处于正常工作状态的从存储节点中选取主存储节点,将选取的所述从存储节点的地址标记为主存储节点地址;
    将标记的所述主存储节点地址发送至控制节点以及选取的所述从存储节点,其中,所述控制节点采用接收到的所述主存储节点地址更新保存的所述主存储节点地址,且选取的所述存储节点接收到所述主存储节点地址时,将工作状态切换为主存储节点状态。
  19. 如权利要求18所述的集群文件***的数据处理的方法,其特征在于,所述存储节点检测其链路连接状态和所述存储节点的运行状态的步骤包括:
    所述存储节点定时向所述控制节点发送第一检测数据包;
    接收所述控制节点基于所述检测数据包反馈的第二响应数据包:
    在预设时间间隔内接收到所述响应数据包时,判断所述存储节点的读取以及写入是否正常;
    在所述存储节点的读取以及写入正常时,判定所述存储节点运行正常,在所述存储节点的读取以及写入异常时,判定所述存储节点运行故障;
    在预设时间间隔内未接收到所述响应数据包时,判定所述存储节点的链路连接故障。
  20. 如权利要求14所述的集群文件***的数据处理的方法,其特征在于,所述集群文件***的数据处理的方法还包括:
    在所述存储节点初次启动时,所述存储节点获取控制节点上的配置参数并进行初始化操作,创建缓存区。
PCT/CN2016/105219 2015-11-26 2016-11-09 集群文件***的数据处理方法和装置 WO2017088664A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510847053.1 2015-11-26
CN201510847053.1A CN105511805B (zh) 2015-11-26 2015-11-26 集群文件***的数据处理方法和装置

Publications (1)

Publication Number Publication Date
WO2017088664A1 true WO2017088664A1 (zh) 2017-06-01

Family

ID=55719825

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/105219 WO2017088664A1 (zh) 2015-11-26 2016-11-09 集群文件***的数据处理方法和装置

Country Status (2)

Country Link
CN (1) CN105511805B (zh)
WO (1) WO2017088664A1 (zh)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107704201A (zh) * 2017-09-11 2018-02-16 厦门集微科技有限公司 数据存储处理方法及装置
CN109508317A (zh) * 2018-10-31 2019-03-22 武汉光谷联众大数据技术有限责任公司 一种大容量数据和服务管理***
CN109587221A (zh) * 2018-11-09 2019-04-05 平安科技(深圳)有限公司 大数据集群管理方法、装置、存储介质和计算机设备
CN110674095A (zh) * 2019-09-27 2020-01-10 浪潮电子信息产业股份有限公司 一种ctdb集群扩展方法、装置、设备及可读存储介质
CN110716692A (zh) * 2018-07-13 2020-01-21 浙江宇视科技有限公司 读取性能提升方法、装置、存储节点及数据读取方法
CN111314129A (zh) * 2020-02-13 2020-06-19 上海凯岸信息科技有限公司 一种基于文件式存储服务高可用架构
CN112395165A (zh) * 2020-11-27 2021-02-23 中电科技(北京)有限公司 获取服务器的资源利用率的方法、设备和服务器
CN112468330A (zh) * 2020-11-13 2021-03-09 苏州浪潮智能科技有限公司 一种故障节点的设置方法、***、设备以及介质
CN112671905A (zh) * 2020-12-23 2021-04-16 广州三七互娱科技有限公司 服务调度方法、装置及***
CN112737962A (zh) * 2020-12-24 2021-04-30 平安科技(深圳)有限公司 存储服务请求的处理方法、装置、计算机设备及存储介质
CN112988905A (zh) * 2021-04-27 2021-06-18 北京沃丰时代数据科技有限公司 用于集群部署的节点内存同步方法及装置
CN113590040A (zh) * 2021-07-29 2021-11-02 郑州阿帕斯数云信息科技有限公司 数据处理方法、装置、设备和存储介质
CN114189547A (zh) * 2022-02-14 2022-03-15 北京安盟信息技术股份有限公司 一种集群下ssl隧道快速切换方法
WO2023109381A1 (zh) * 2021-12-16 2023-06-22 中移(苏州)软件技术有限公司 一种信息处理方法及装置、存储介质

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105511805B (zh) * 2015-11-26 2019-03-19 深圳市中博科创信息技术有限公司 集群文件***的数据处理方法和装置
CN106254103B (zh) * 2016-07-28 2019-08-16 北京国电通网络技术有限公司 一种rtmp集群***可动态配置方法及装置
CN106598762B (zh) * 2016-12-29 2020-04-17 上海理想信息产业(集团)有限公司 一种消息同步方法
CN108512753B (zh) 2017-02-28 2020-09-29 华为技术有限公司 一种集群文件***中消息传输的方法及装置
CN107168649B (zh) * 2017-05-05 2019-12-17 南京城市职业学院 一种分布式存储***中数据分布的方法及装置
CN107707620B (zh) * 2017-08-30 2020-09-11 华为技术有限公司 处理io请求的方法及装置
CN109543204B (zh) * 2017-09-22 2022-09-13 南京理工大学 人体静电作用下半导体器件电热一体化分析方法
CN108023772B (zh) * 2017-12-07 2021-02-26 海能达通信股份有限公司 一种异常节点修复方法、装置及相关设备
CN110099084B (zh) * 2018-01-31 2021-06-15 北京易真学思教育科技有限公司 一种保证存储服务可用性的方法、***及计算机可读介质
CN108829720B (zh) * 2018-05-07 2022-01-14 麒麟合盛网络技术股份有限公司 数据处理方法及装置
CN109213507A (zh) * 2018-08-27 2019-01-15 郑州云海信息技术有限公司 一种升级方法及服务器
CN110474981A (zh) * 2019-08-13 2019-11-19 中科天御(苏州)科技有限公司 一种软件定义动态安全存储方法及装置
CN110868323B (zh) * 2019-11-15 2022-07-22 浪潮电子信息产业股份有限公司 一种带宽控制方法、装置、设备及介质
CN111371865B (zh) * 2020-02-26 2023-02-24 上海达梦数据库有限公司 一种客户端连接关系调整方法、***及节点
CN113051349A (zh) * 2021-04-02 2021-06-29 广东美电贝尔科技集团股份有限公司 一种执勤***数据同步方法
CN113259092A (zh) * 2021-04-04 2021-08-13 余绍祥 一种文档分布式加密***
CN116644039B (zh) * 2023-05-25 2023-12-19 安徽继远软件有限公司 一种基于大数据的在线能力运营日志自动采集分析的方法

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110055494A1 (en) * 2009-08-25 2011-03-03 Yahoo! Inc. Method for distributed direct object access storage
CN102035862A (zh) * 2009-09-30 2011-04-27 国际商业机器公司 Svc集群中配置节点的故障移交方法和***
US20130073717A1 (en) * 2011-09-15 2013-03-21 International Business Machines Corporation Optimizing clustered network attached storage (nas) usage
CN105511805A (zh) * 2015-11-26 2016-04-20 深圳市中博科创信息技术有限公司 集群文件***的数据处理方法和装置

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4643543B2 (ja) * 2006-11-10 2011-03-02 株式会社東芝 キャッシュ一貫性保証機能を有するストレージクラスタシステム
US8875146B2 (en) * 2011-08-01 2014-10-28 Honeywell International Inc. Systems and methods for bounding processing times on multiple processing units
CN103634269B (zh) * 2012-08-21 2017-04-19 ***股份有限公司 单点登录***及方法
US9189510B2 (en) * 2013-02-26 2015-11-17 Facebook, Inc. System and method for implementing cache consistent regional clusters
CN103207841B (zh) * 2013-03-06 2016-01-20 青岛海信传媒网络技术有限公司 基于键值对缓存的数据读写方法及装置
CN103207894A (zh) * 2013-03-14 2013-07-17 深圳市知正科技有限公司 一种多路实时视频数据存储***及其进行缓存控制的方法
CN104361030A (zh) * 2014-10-24 2015-02-18 西安未来国际信息股份有限公司 一种具有任务分发功能的分布式缓存架构及缓存方法
CN104580432A (zh) * 2014-12-23 2015-04-29 上海帝联信息科技股份有限公司 memcached***及内存缓存数据提供、维护和集群维护方法及装置

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110055494A1 (en) * 2009-08-25 2011-03-03 Yahoo! Inc. Method for distributed direct object access storage
CN102035862A (zh) * 2009-09-30 2011-04-27 国际商业机器公司 Svc集群中配置节点的故障移交方法和***
US20130073717A1 (en) * 2011-09-15 2013-03-21 International Business Machines Corporation Optimizing clustered network attached storage (nas) usage
CN105511805A (zh) * 2015-11-26 2016-04-20 深圳市中博科创信息技术有限公司 集群文件***的数据处理方法和装置

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107704201A (zh) * 2017-09-11 2018-02-16 厦门集微科技有限公司 数据存储处理方法及装置
CN107704201B (zh) * 2017-09-11 2020-07-31 厦门集微科技有限公司 数据存储处理方法及装置
CN110716692B (zh) * 2018-07-13 2022-11-25 浙江宇视科技有限公司 读取性能提升方法、装置、存储节点及数据读取方法
CN110716692A (zh) * 2018-07-13 2020-01-21 浙江宇视科技有限公司 读取性能提升方法、装置、存储节点及数据读取方法
CN109508317A (zh) * 2018-10-31 2019-03-22 武汉光谷联众大数据技术有限责任公司 一种大容量数据和服务管理***
CN109508317B (zh) * 2018-10-31 2023-06-09 陕西合友网络科技有限公司 一种大容量数据和服务管理***
CN109587221A (zh) * 2018-11-09 2019-04-05 平安科技(深圳)有限公司 大数据集群管理方法、装置、存储介质和计算机设备
CN110674095A (zh) * 2019-09-27 2020-01-10 浪潮电子信息产业股份有限公司 一种ctdb集群扩展方法、装置、设备及可读存储介质
CN110674095B (zh) * 2019-09-27 2022-06-10 浪潮电子信息产业股份有限公司 一种ctdb集群扩展方法、装置、设备及可读存储介质
CN111314129A (zh) * 2020-02-13 2020-06-19 上海凯岸信息科技有限公司 一种基于文件式存储服务高可用架构
CN112468330A (zh) * 2020-11-13 2021-03-09 苏州浪潮智能科技有限公司 一种故障节点的设置方法、***、设备以及介质
CN112468330B (zh) * 2020-11-13 2022-12-06 苏州浪潮智能科技有限公司 一种故障节点的设置方法、***、设备以及介质
CN112395165A (zh) * 2020-11-27 2021-02-23 中电科技(北京)有限公司 获取服务器的资源利用率的方法、设备和服务器
CN112671905A (zh) * 2020-12-23 2021-04-16 广州三七互娱科技有限公司 服务调度方法、装置及***
CN112737962A (zh) * 2020-12-24 2021-04-30 平安科技(深圳)有限公司 存储服务请求的处理方法、装置、计算机设备及存储介质
CN112737962B (zh) * 2020-12-24 2023-06-02 平安科技(深圳)有限公司 存储服务请求的处理方法、装置、计算机设备及存储介质
CN112988905B (zh) * 2021-04-27 2021-08-10 北京沃丰时代数据科技有限公司 用于集群部署的节点内存同步方法及装置
CN112988905A (zh) * 2021-04-27 2021-06-18 北京沃丰时代数据科技有限公司 用于集群部署的节点内存同步方法及装置
CN113590040A (zh) * 2021-07-29 2021-11-02 郑州阿帕斯数云信息科技有限公司 数据处理方法、装置、设备和存储介质
CN113590040B (zh) * 2021-07-29 2024-03-19 郑州阿帕斯数云信息科技有限公司 数据处理方法、装置、设备和存储介质
WO2023109381A1 (zh) * 2021-12-16 2023-06-22 中移(苏州)软件技术有限公司 一种信息处理方法及装置、存储介质
CN114189547B (zh) * 2022-02-14 2022-05-03 北京安盟信息技术股份有限公司 一种集群下ssl隧道快速切换方法
CN114189547A (zh) * 2022-02-14 2022-03-15 北京安盟信息技术股份有限公司 一种集群下ssl隧道快速切换方法

Also Published As

Publication number Publication date
CN105511805A (zh) 2016-04-20
CN105511805B (zh) 2019-03-19

Similar Documents

Publication Publication Date Title
WO2017088664A1 (zh) 集群文件***的数据处理方法和装置
WO2016068622A1 (en) Terminal device and method of controlling same
WO2018233370A1 (zh) 镜像同步方法、***、设备及计算机可读存储介质
WO2013131444A1 (zh) 分享内容的方法、终端、服务器及***、计算机存储介质
WO2013017004A1 (zh) 文件的扫描方法、***、客户端及服务器
WO2019128174A1 (zh) 音频播放方法、智能电视及计算机可读存储介质
WO2015172684A1 (en) Ap connection method, terminal, and server
WO2019205272A1 (zh) 虚拟机服务提供方法、装置、设备及计算机可读存储介质
WO2018068411A1 (zh) 智能电视远程调试方法及智能电视远程调试***
WO2015157942A1 (zh) 接入无线网络的装置及方法
WO2017206883A1 (zh) 一种应用处理方法、装置、存储介质及电子设备
WO2018028121A1 (zh) 数据分区的存储空间管理方法及装置
WO2018076864A1 (zh) 一种数据同步方法、装置、存储介质及电子设备
WO2018014567A1 (zh) 一种提高虚拟机性能的方法、终端、设备及计算机可读存储介质
WO2018076811A1 (zh) 数据分享方法、装置、存储介质及电子设备
WO2016058258A1 (zh) 终端远程控制方法和***
WO2018053963A1 (zh) 智能电视的***升级方法及装置
WO2018120680A1 (zh) 虚拟磁盘备份***、方法、装置、服务主机和存储介质
WO2018076875A1 (zh) 备份数据的同步方法、装置、存储介质、电子设备及服务器
WO2021241849A1 (ko) 에지 컴퓨팅 서비스를 수행하는 전자 장치 및 전자 장치의 동작 방법
WO2016000560A1 (en) File transmission method, file transmission apparatus, and file transmission system
WO2018121026A1 (zh) 一种机顶盒配置方法及***
WO2018076870A1 (zh) 数据处理方法、装置、存储介质、服务器及数据处理***
WO2017024805A1 (zh) 一种文件分发方法、装置和***
WO2018076842A1 (zh) 一种数据备份方法、装置、***、存储介质及电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16867888

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16867888

Country of ref document: EP

Kind code of ref document: A1