US20200241781A1 - Method and system for inline deduplication using erasure coding - Google Patents

Method and system for inline deduplication using erasure coding

Info

Publication number
US20200241781A1
Authority
US
United States
Prior art keywords
data
data chunks
deduplicated
chunks
chunk
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US16/260,734
Inventor
Dharmesh M. Patel
Rizwan Ali
Ravikanth Chaganti
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dell Products LP
Original Assignee
Dell Products LP
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dell Products LP filed Critical Dell Products LP
Priority to US16/260,734
Assigned to DELL PRODUCTS L.P. reassignment DELL PRODUCTS L.P. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ALI, RIZWAN, CHAGANTI, RAVIKANTH, PATEL, DHARMESH M.
Assigned to THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A. reassignment THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A. SECURITY AGREEMENT Assignors: CREDANT TECHNOLOGIES, INC., DELL INTERNATIONAL L.L.C., DELL MARKETING L.P., DELL PRODUCTS L.P., DELL USA L.P., EMC CORPORATION, EMC IP Holding Company LLC, FORCE10 NETWORKS, INC., WYSE TECHNOLOGY L.L.C.
Assigned to THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A. reassignment THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A. SECURITY AGREEMENT Assignors: CREDANT TECHNOLOGIES INC., DELL INTERNATIONAL L.L.C., DELL MARKETING L.P., DELL PRODUCTS L.P., DELL USA L.P., EMC CORPORATION, EMC IP Holding Company LLC, FORCE10 NETWORKS, INC., WYSE TECHNOLOGY L.L.C.
Publication of US20200241781A1
Priority to US17/100,178 (US11281389B2)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0608Saving storage space on storage systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/08Error detection or correction by redundancy in data representation, e.g. by using checking codes
    • G06F11/10Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
    • G06F11/1076Parity data used in redundant arrays of independent storages, e.g. in RAID systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0614Improving the reliability of storage systems
    • G06F3/0619Improving the reliability of storage systems in relation to data integrity, e.g. data losses, bit errors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0638Organizing or formatting or addressing of data
    • G06F3/064Management of blocks
    • G06F3/0641De-duplication techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0646Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
    • G06F3/065Replication mechanisms
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/067Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671In-line storage system
    • G06F3/0673Single storage device
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/37Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
    • H03M13/373Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35 with erasure correction and erasure determination, e.g. for packet loss recovery or setting of erasures for the decoding of Reed-Solomon codes

Definitions

  • Computing devices may include any number of internal components such as processors, memory, and persistent storage. Each of the internal components of a computing device may be used to generate data. The process of generating, storing, and backing-up data may utilize computing resources of the computing devices such as processing and storage. The utilization of the aforementioned computing resources to generate backups may impact the overall performance of the computing resources.
  • In general, in one aspect, the invention relates to a method for storing data in accordance with one or more embodiments of the invention.
  • the method includes obtaining data, applying an erasure coding procedure to the data to obtain a plurality of data chunks and a parity chunk, deduplicating the plurality of data chunks to obtain a plurality of deduplicated data chunks, storing, across a plurality of nodes, the plurality of deduplicated data chunks and the parity chunk, and tracking location information for each of the plurality of deduplicated data chunks and the parity chunk.
  • the invention relates to a non-transitory computer readable medium, in accordance with one or more embodiments of the invention, that includes computer readable program code, which when executed by a computer processor enables the computer processor to perform a method for storing data.
  • the method includes obtaining data, applying an erasure coding procedure to the data to obtain a plurality of data chunks and a parity chunk, deduplicating the plurality of data chunks to obtain a plurality of deduplicated data chunks, storing, across a plurality of nodes, the plurality of deduplicated data chunks and the parity chunk, and tracking location information for each of the plurality of deduplicated data chunks and the parity chunk.
  • the invention relates to a data cluster.
  • the data cluster includes a plurality of data nodes comprising an accelerator pool and a non-accelerator pool, wherein the accelerator pool comprises a data node, and the non-accelerator pool comprises a plurality of data nodes; wherein the data node of the accelerator pool is programmed to: obtain data, apply an erasure coding procedure to the data to obtain a plurality of data chunks and a parity chunk, deduplicate the plurality of data chunks to obtain a plurality of deduplicated data chunks, store, across a plurality of nodes, the plurality of deduplicated data chunks and the parity chunk, and track location information for each of the plurality of deduplicated data chunks and the parity chunk.
  • FIG. 1A shows a diagram of a system in accordance with one or more embodiments of the invention.
  • FIG. 1B shows a diagram of a data cluster in accordance with one or more embodiments of the invention.
  • FIG. 2 shows a flowchart for storing data in a data cluster in accordance with one or more embodiments of the invention.
  • FIGS. 3A-3C show an example in accordance with one or more embodiments of the invention.
  • FIG. 4 shows a diagram of a computing device in accordance with one or more embodiments of the invention.
  • any component described with regard to a figure in various embodiments of the invention, may be equivalent to one or more like-named components described with regard to any other figure.
  • descriptions of these components will not be repeated with regard to each figure.
  • each and every embodiment of the components of each figure is incorporated by reference and assumed to be optionally present within every other figure having one or more like-named components.
  • any description of the components of a figure is to be interpreted as an optional embodiment, which may be implemented in addition to, in conjunction with, or in place of the embodiments described with regard to a corresponding like-named component in any other figure.
  • a data structure may include a first element labeled as A and a second element labeled as N.
  • This labeling convention means that the data structure may include any number of the elements.
  • a second data structure also labeled as A to N, may also include any number of elements. The number of elements of the first data structure and the number of elements of the second data structure may be the same or different.
  • Embodiments of the invention relate to a method and system for storing data in a data cluster.
  • Embodiments of the invention may utilize a deduplicator, operating in an accelerator pool, which applies an erasure coding procedure on data obtained from a host to divide the data into data chunks and to generate parity chunks using the data chunks.
  • the deduplicator may then perform deduplication on the data chunks to generate deduplicated data that includes deduplicated data chunks.
  • the deduplicated data chunks and the parity chunks are subsequently distributed to nodes in the data cluster in accordance with an erasure coding procedure.
  • the deduplicator stores storage information that specifies the nodes in which each data chunk and parity chunk is stored. In this manner, if the accelerator pool obtains data that include modifications to previously stored data chunks, the modified data chunks may be sent to the appropriate nodes (i.e., the nodes on which prior versions of the specific data chunk or parity chunk are stored). In this manner, embodiments of the invention minimize the number of read and write operations that are required to write erasure coded deduplicated data to the non-accelerator pool.
  • one or more embodiments of the invention enable only portions of a stripe (i.e., a set of data chunks and parity chunks) to be written to the non-accelerator pool when a portion of the stripe is modified. This results in fewer read and write operations being performed, as none of the prior stored data chunks need to be read from or re-written to the non-accelerator pool.
  • FIG. 1A shows an example system in accordance with one or more embodiments of the invention.
  • the system includes a host ( 100 ) and a data cluster ( 110 ).
  • the host ( 100 ) is operably connected to the data cluster ( 110 ) via any combination of wired and/or wireless connections.
  • the host ( 100 ) utilizes the data cluster ( 110 ) to store data.
  • the data stored may be backups of databases, files, applications, and/or other types of data without departing from the invention.
  • the host ( 100 ) is implemented as a computing device (see e.g., FIG. 4 ).
  • the computing device may be, for example, a laptop computer, a desktop computer, a server, a distributed computing system, or a cloud resource (e.g., a third-party storage system accessible via a wired or wireless connection).
  • the computing device may include one or more processors, memory (e.g., random access memory), and persistent storage (e.g., disk drives, solid state drives, etc.).
  • the computing device may include instructions, stored on the persistent storage, that when executed by the processor(s) of the computing device cause the computing device to perform the functionality of the host ( 100 ) described throughout this application.
  • the host ( 100 ) is implemented as a logical device.
  • the logical device may utilize the computing resources of any number of computing devices and thereby provide the functionality of the host ( 100 ) described throughout this application.
  • the data cluster ( 110 ) stores data and/or backups of data generated by the host ( 100 ).
  • the data and/or backups may be deduplicated versions of data obtained from the host.
  • the data cluster may, via an erasure coding procedure, store portions of the deduplicated data across the nodes operating in the data cluster ( 110 ).
  • deduplication refers to methods of storing only portions of files (also referred to as file segments or segments) that are not already stored in persistent storage. For example, when multiple versions of a large file, having only minimal differences between each of the versions, are stored without deduplication, storing each version will require approximately the same amount of storage space of a persistent storage. In contrast, when the multiple versions of the large file are stored with deduplication, only the first version of the multiple versions stored will require a substantial amount of storage.
  • Once the first version is stored in the persistent storage, the subsequent versions of the large file subsequently stored will be de-duplicated before being stored in the persistent storage, resulting in much less storage space of the persistent storage being required to store the subsequently stored versions when compared to the amount of storage space of the persistent storage required to store the first stored version.
  • the data cluster ( 110 ) may include nodes that each store any number of portions of data.
  • the portions of data may be obtained by other nodes or obtained from the host ( 100 ).
  • For additional details regarding the data cluster ( 110 ), see, e.g., FIG. 1B .
  • FIG. 1B shows a diagram of a data cluster ( 120 ) in accordance with one or more embodiments of the invention.
  • the data cluster ( 120 ) may be an embodiment of the data cluster ( 110 , FIG. 1A ) discussed above.
  • the data cluster ( 120 ) may include an accelerator pool ( 130 ) and a non-accelerator pool ( 150 ).
  • the accelerator pool ( 130 ) may include a deduplicator(s) ( 132 ) and any number of data nodes ( 134 , 136 ).
  • the non-accelerator pool ( 150 ) includes any number of data nodes ( 154 , 156 ).
  • the components of the data cluster ( 120 ) may be operably connected via any combination of wired and/or wireless connections. Each of the aforementioned components is discussed below.
  • the deduplicator(s) ( 132 ) is a device that includes functionality to perform deduplication on data obtained from a host (e.g., 100 , FIG. 1A ).
  • the deduplicator ( 132 ) may store information useful to perform the aforementioned functionality.
  • the information may include deduplication identifiers (D-IDs).
  • A D-ID is a unique identifier that identifies portions of the data (also referred to as data chunks) that are stored in the data cluster ( 120 ).
  • The D-ID may be used to determine whether a data chunk of the obtained data is already present elsewhere in the accelerator pool ( 130 ) or the non-accelerator pool ( 150 ).
  • The deduplicator ( 132 ) may use the information to perform the deduplication and generate deduplicated data (or a deduplicated backup). After deduplication, an erasure coding procedure may be performed on the deduplicated data in order to generate parity chunks. The deduplicator ( 132 ) may perform the deduplication and erasure coding procedure via the method illustrated in FIG. 2 .
  • the deduplicator ( 132 ) is implemented as computer instructions, e.g., computer code, stored on a persistent storage that when executed by a processor of a data node (e.g., 134 , 136 ) of the accelerator pool ( 130 ) cause the data node to provide the aforementioned functionality of the deduplicator ( 132 ) described throughout this application and/or all, or a portion thereof, of the method illustrated in FIG. 2 .
  • the deduplicator ( 132 ) is implemented as a computing device (see e.g., FIG. 4 ).
  • the computing device may be, for example, a laptop computer, a desktop computer, a server, a distributed computing system, or a cloud resource (e.g., a third-party storage system accessible via a wired or wireless connection).
  • the computing device may include one or more processors, memory (e.g., random access memory), and persistent storage (e.g., disk drives, solid state drives, etc.).
  • the computing device may include instructions, stored on the persistent storage, that when executed by the processor(s) of the computing device cause the computing device to perform the functionality of the deduplicator ( 132 ) described throughout this application and/or all, or a portion thereof, of the method illustrated in FIG. 2 .
  • the deduplicator ( 132 ) is implemented as a logical device.
  • the logical device may utilize the computing resources of any number of computing devices and thereby provide the functionality of the deduplicator ( 132 ) described throughout this application and/or all, or a portion thereof, of the method illustrated in FIG. 2 .
  • different data nodes in the cluster may include different quantities and/or types of computing resources, e.g., processors providing processing resources, memory providing memory resources, storages providing storage resources, communicators providing communications resources.
  • the system may include a heterogeneous population of nodes.
  • the heterogeneous population of nodes may be logically divided into an accelerator pool ( 130 ) including nodes that have more computing resources (e.g., high performance nodes ( 134 , 136 )) than other nodes and a non-accelerator pool ( 150 ) including nodes that have fewer computing resources (e.g., low performance nodes ( 154 , 156 )) than the nodes in the accelerator pool ( 130 ).
  • nodes of the accelerator pool ( 130 ) may include enterprise class solid state storage resources that provide very high storage bandwidth, low latency, and high input-outputs per second (IOPS).
  • the nodes of the non-accelerator pool ( 150 ) may include hard disk drives that provide lower storage performance. While illustrated in FIG. 1B as being divided into two groups, the nodes may be divided into any number of groupings based on the relative performance level of each node without departing from the invention.
  • the data nodes ( 134 , 136 , 154 , 156 ) store data chunks and parity chunks.
  • the data nodes ( 134 , 136 , 154 , 156 ) may include persistent storage that may be used to store the data chunks and parity chunks. The generation of the data chunks and parity chunks is described below with respect to FIG. 2 .
  • the non-accelerator pool ( 150 ) includes any number of fault domains.
  • a fault domain is a logical grouping of nodes (e.g., data nodes) that, when one node of the logical grouping of nodes goes offline and/or otherwise becomes inaccessible, the other nodes in the logical grouping of nodes are directly affected. The effect of the node going offline to the other nodes may include the other nodes also going offline and/or otherwise inaccessible.
  • the non-accelerator pool ( 150 ) may include multiple fault domains. In this manner, the events of one fault domain in the non-accelerator pool ( 150 ) may have no effect on other fault domains in the non-accelerator pool ( 150 ).
  • For example, two data nodes may be in a first fault domain. If one of these data nodes in the first fault domain experiences an unexpected shutdown, other nodes in the first fault domain may be affected. In contrast, a data node in a second fault domain may not be affected by the unexpected shutdown of a data node in the first fault domain. In one or more embodiments of the invention, the unexpected shutdown of one fault domain does not affect the nodes of other fault domains. In this manner, data may be replicated and stored across multiple fault domains to allow high availability of the data.
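  • The following is a minimal sketch, under assumed node and fault domain names not taken from this disclosure, of how the chunks of a single stripe might each be placed in a different fault domain:

```python
# Minimal sketch: spread the chunks of one stripe so that no two chunks land
# in the same fault domain. The domain/node names are illustrative only.

fault_domains = {
    "fd-1": ["node-1a", "node-1b"],
    "fd-2": ["node-2a"],
    "fd-3": ["node-3a"],
    "fd-4": ["node-4a"],
}


def pick_nodes(chunk_count: int) -> list[str]:
    """Pick one node from each fault domain, one per chunk of the stripe."""
    if chunk_count > len(fault_domains):
        raise ValueError("not enough fault domains for the stripe")
    return [nodes[0] for nodes in list(fault_domains.values())[:chunk_count]]


print(pick_nodes(4))  # ['node-1a', 'node-2a', 'node-3a', 'node-4a']
```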
  • each data node ( 134 , 136 , 154 , 156 ) is implemented as a computing device (see e.g., FIG. 4 ).
  • the computing device may be, for example, a laptop computer, a desktop computer, a server, a distributed computing system, or a cloud resource (e.g., a third-party storage system accessible via a wired or wireless connection).
  • the computing device may include one or more processors, memory (e.g., random access memory), and persistent storage (e.g., disk drives, solid state drives, etc.).
  • the computing device may include instructions, stored on the persistent storage, that when executed by the processor(s) of the computing device cause the computing device to perform the functionality of the data node ( 134 , 136 , 154 , 156 ) described throughout this application and/or all, or a portion thereof, of the method illustrated in FIG. 2 .
  • the data nodes are implemented as a logical device.
  • the logical device may utilize the computing resources of any number of computing devices and thereby provide the functionality of the data nodes ( 134 , 136 , 154 , 156 ) described throughout this application and/or all, or a portion thereof, of the method illustrated in FIG. 2 .
  • FIG. 2 shows a flowchart for storing data in a data cluster in accordance with one or more embodiments of the invention.
  • the method shown in FIG. 2 may be performed by, for example, a deduplicator ( 132 , FIG. 1B ).
  • Other components of the system illustrated in FIG. 1B may perform the method of FIG. 2 without departing from the invention. While the various steps in the flowchart are presented and described sequentially, one of ordinary skill in the relevant art will appreciate that some or all of the steps may be executed in different orders, may be combined or omitted, and some or all steps may be executed in parallel.
  • In step 200, data is obtained from a host.
  • the data may be a file, a file segment, a collection of files, or any other type of data without departing from the invention.
  • the data may be obtained in response to a request to store data and/or backup the data. Other requests may be used to initiate the method without departing from the invention.
  • In step 202, a confirmation is sent to the host.
  • the confirmation is an acknowledgement (ACK) that confirms receipt of the data by the data cluster.
  • In step 204, an erasure coding procedure is performed on the data to generate data chunks and parity chunks.
  • the erasure coding procedure includes dividing the obtained data into portions, referred to as data chunks. Each data chunk may include any number of data segments associated with the obtained data. The individual data chunks may then be combined (or otherwise grouped) into stripes (also referred to as Redundant Array of Independent Disks (RAID) stripes). One or more parity values are then calculated for each of the aforementioned stripes. The number of parity values may vary based on the erasure coding algorithm that is being used as part of the erasure coding procedure.
  • Non-limiting examples of erasure coding algorithms are RAID-4, RAID-5, and RAID-6. Other erasure coding algorithms may be used without departing from the invention.
  • If the erasure coding procedure implements RAID 4, then a single parity value is calculated. The resulting parity value is then stored in a parity chunk. If the erasure coding algorithm requires multiple parity values to be calculated, then the multiple parity values are calculated, with each parity value being stored in a separate parity chunk.
  • the data chunks are used to generate parity chunks in accordance with the erasure coding procedure. More specifically, the parity chunks may be generated by applying a predetermined function (e.g., P Parity function, Q Parity Function), operation, or calculation to at least one of the data chunks. Depending on the erasure coding procedure used, the parity chunks may include, but are not limited to, P parity values and/or Q parity values.
  • the P parity value is a Reed-Solomon syndrome and, as such, the P Parity function may correspond to any function that can generate a Reed-Solomon syndrome. In one embodiment of the invention, the P parity function is an XOR function.
  • the Q parity value is a Reed-Solomon syndrome and, as such, the Q Parity function may correspond to any function that can generate a Reed-Solomon syndrome.
  • In one embodiment of the invention, a Q parity value is a Reed-Solomon code, defined as: Q = g^0·D_0 + g^1·D_1 + g^2·D_2 + . . . + g^(n-1)·D_(n-1), where Q corresponds to the Q parity, g is a generator of the field, and the value of D corresponds to the data in the data chunks.
  • the number of data chunks and parity chunks generated is determined by the erasure coding procedure, which may be specified by the host, by the data cluster, and/or by another entity.
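  • As a concrete illustration of one such procedure, the sketch below divides data into fixed-size chunks, groups the chunks into stripes, and computes a single P parity chunk per stripe as a byte-wise XOR (a RAID-4/5 style single-parity case; the Q parity case is omitted). The chunk size, stripe width, and function names are illustrative assumptions rather than details specified by this disclosure:

```python
# Illustrative single-parity erasure coding sketch (not the patented procedure itself).

CHUNK_SIZE = 4     # bytes per data chunk; real systems use far larger chunks
STRIPE_WIDTH = 3   # number of data chunks per stripe


def split_into_chunks(data: bytes, chunk_size: int = CHUNK_SIZE) -> list[bytes]:
    """Divide the obtained data into fixed-size data chunks (zero-padding the tail)."""
    chunks = []
    for i in range(0, len(data), chunk_size):
        chunks.append(data[i:i + chunk_size].ljust(chunk_size, b"\x00"))
    return chunks


def xor_parity(chunks: list[bytes]) -> bytes:
    """Compute a single P parity chunk as the byte-wise XOR of the data chunks."""
    parity = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            parity[i] ^= b
    return bytes(parity)


def erasure_code(data: bytes) -> list[tuple[list[bytes], bytes]]:
    """Group data chunks into stripes and append one parity chunk per stripe."""
    chunks = split_into_chunks(data)
    stripes = []
    for i in range(0, len(chunks), STRIPE_WIDTH):
        stripe_chunks = chunks[i:i + STRIPE_WIDTH]
        stripes.append((stripe_chunks, xor_parity(stripe_chunks)))
    return stripes


# Any single lost data chunk in a stripe can be rebuilt by XOR-ing the parity
# chunk with the surviving data chunks of that stripe.
```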
  • In step 206, deduplication is performed on the data chunks to obtain deduplicated data chunks.
  • the deduplication is performed in the accelerator pool by identifying the data chunks of the obtained data and assigning a fingerprint to each data chunk.
  • a fingerprint is a unique identifier (e.g., a D-ID) that may be stored in metadata of the data chunk.
  • the deduplicator performing the deduplication may generate a fingerprint for a data chunk and identify whether the fingerprint matches an existing fingerprint stored in the deduplicator. If the fingerprint matches an existing fingerprint, the data chunk may be deleted, as it is already stored in the data cluster. If the fingerprint does not match any existing fingerprints, the data chunk may be stored as a deduplicated data chunk. Additionally, the fingerprint is stored in the deduplicator for deduplication purposes of future obtained data.
  • the deduplicated data chunks collectively make up the deduplicated data. In one or more embodiments of the invention, the deduplicated data chunks are the data chunks that were not deleted during deduplication.
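  • A minimal sketch of this fingerprint-based deduplication is shown below; it assumes SHA-256 digests serve as the fingerprints (D-IDs) and an in-memory set as the fingerprint store, neither of which is mandated by this disclosure:

```python
import hashlib

# Illustrative fingerprint store; in practice this state would live in the deduplicator.
known_fingerprints: set[str] = set()


def fingerprint(chunk: bytes) -> str:
    """Assign a fingerprint (a unique identifier, e.g. a D-ID) to a data chunk."""
    return hashlib.sha256(chunk).hexdigest()


def deduplicate(data_chunks: list[bytes]) -> list[tuple[str, bytes]]:
    """Return only the chunks whose fingerprints are not already known.

    Chunks with a matching fingerprint are dropped (they are already stored in
    the data cluster); new fingerprints are recorded for deduplication of
    future obtained data.
    """
    deduplicated = []
    for chunk in data_chunks:
        fp = fingerprint(chunk)
        if fp in known_fingerprints:
            continue  # already stored; do not store again
        known_fingerprints.add(fp)
        deduplicated.append((fp, chunk))
    return deduplicated
```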
  • In step 208, the deduplicated data chunks and parity chunks are stored across data nodes in different fault domains in a non-accelerator pool.
  • the deduplicated data chunks and the parity chunks are stored in a manner that minimizes reads and writes from the non-accelerator pool. In one embodiment of the invention, this minimization is achieved by storing data chunks and parity chunks, which are collectively referred to as a stripe, in the same manner as a prior version of the stripe.
  • the deduplicator may use, as appropriate, location information for the previously stored data chunks and parity chunks to determine where to store the data chunks and parity chunks in step 208 .
  • If the deduplicated data chunks and parity chunks are the first version of a stripe, the deduplicated data chunks and parity chunks may be stored across the nodes (each in a different fault domain) in the non-accelerator pool.
  • the location (or in this case the specific node) in which each data chunk or parity chunk is stored is tracked by the deduplicator. This scenario does not require the deduplicator to use location information for previously stored data chunks and parity chunks.
  • If the deduplicated data chunks and parity chunks are the second version of a stripe (e.g., a modification to a previously stored stripe), the deduplicated data chunks and parity chunks are stored across the nodes (each in a different fault domain) in the non-accelerator pool using prior stored location information.
  • the location (or in this case the specific node and/or fault domain) in which the data chunk or parity chunk is stored is tracked by the deduplicator.
  • For example, assume that the first version of the stripe includes three data chunks (D1, D2, D3) and one parity chunk (P1) and that they were stored as follows: Node 1 stores D1, Node 2 stores D2, Node 3 stores D3, and Node 4 stores P1.
  • Further assume that a second version of the stripe is received that includes three data chunks (D1, D2′, D3) and one newly calculated parity chunk (P1′). Using the prior stored location information, D2′ is stored on Node 2 and P1′ is stored on Node 4.
  • In this manner, the data chunks and parity chunks associated with the second stripe satisfy the condition that all data chunks and parity chunks for the second version of the stripe are stored in separate fault domains. If the location information were not taken into account, then the entire stripe (i.e., D1, D2′, D3, and P1′) would need to be stored in order to guarantee that the requirement that all data chunks and parity chunks for the second version of the stripe are stored in separate fault domains is satisfied.
  • When a data node receives a modified version of a deduplicated data chunk that it already stores, the data node may: (i) store the modified version of the deduplicated data chunk (i.e., the data node would include two versions of the data chunk) or (ii) store the modified version of the deduplicated data chunk and delete the prior version of the deduplicated data chunk.
  • the deduplicator includes functionality to determine whether a given data chunk is a modified version of a previously stored data chunk. Said another way, after the data received from a host is divided into data chunks and grouped into stripes, the deduplicator includes functionality to determine whether a stripe is a modified version of a prior stored stripe. The deduplicator may use the fingerprints of the data chunks within the stripe to determine whether the stripe is a modified version of a prior stored stripe. Other methods for determining whether a data chunk is a modified version of a prior stored data chunk and/or whether a stripe is a modified version of a prior stripe may be used without departing from the invention.
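  • The sketch below illustrates how prior stored location information might be consulted so that only the modified chunks of a stripe are written; the location table, node names, and round-robin fallback for a first-version stripe are illustrative assumptions:

```python
from typing import Optional

# Illustrative state: chunk identifier -> node identifier, plus the available nodes
# (assumed to each sit in a different fault domain).
location_info: dict[str, str] = {}
nodes = ["node-1", "node-2", "node-3", "node-4"]


def place_stripe(stripe: list[tuple[str, bytes]],
                 prior_ids: Optional[list[str]] = None) -> dict[str, str]:
    """Assign each new or modified chunk of a stripe to a node.

    If a prior version of the stripe exists, a modified chunk is sent to the
    node that stores the prior version of that chunk, so only the modified
    chunks (and the recalculated parity chunk) need to be written.
    """
    placement = {}
    for slot, (chunk_id, _chunk) in enumerate(stripe):
        if prior_ids is not None and chunk_id == prior_ids[slot]:
            continue  # unchanged chunk is already stored; nothing to write
        if prior_ids is not None:
            node = location_info[prior_ids[slot]]  # reuse the prior chunk's node
            del location_info[prior_ids[slot]]     # prior version may be deleted
        else:
            node = nodes[slot % len(nodes)]        # first version: spread across fault domains
        location_info[chunk_id] = node
        placement[chunk_id] = node
    return placement
```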
  • In step 210, location information in the deduplicator is updated using the locations of the deduplicated data chunks and parity chunks.
  • The location (or locations) may be specified using a node identifier, a fault domain identifier (i.e., the fault domain in which the node storing the data chunk or parity chunk is located), or any other type of identifying information.
  • The location information may be stored along with other chunk metadata, which may include, but is not limited to, a chunk type (e.g., data chunk or parity chunk), a deduplicated data chunk identifier (e.g., a D-ID) or parity chunk identifier (which may be generated for a parity chunk in the same manner as a D-ID for a data chunk), and the erasure coding information (e.g., information about the erasure coding procedure, such as the erasure coding algorithm).
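  • A hypothetical shape for such chunk metadata is sketched below; the field names and example values are chosen for illustration and are not taken from this disclosure:

```python
from dataclasses import dataclass


@dataclass
class ChunkMetadata:
    chunk_id: str          # deduplicated data chunk identifier (D-ID) or parity chunk identifier
    chunk_type: str        # "data" or "parity"
    node_id: str           # node storing the chunk
    fault_domain_id: str   # fault domain in which that node is located
    erasure_coding: str    # e.g. the erasure coding algorithm used for the stripe
    stripe_id: str         # stripe membership, useful for recovery operations


# Example entries the deduplicator might keep as location information.
location_info = {
    "d-17": ChunkMetadata("d-17", "data", "node-2", "fd-2", "RAID-4", "stripe-9"),
    "p-09": ChunkMetadata("p-09", "parity", "node-4", "fd-4", "RAID-4", "stripe-9"),
}
```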
  • the data chunks and parity chunks may be stored in different fault domains. Storing the data chunks and parity chunks in multiple fault domains may be for recovery purposes. In the event that one or more fault domains storing data chunks or parity chunks become inaccessible, the data chunks and/or parity chunks stored in the remaining fault domains may be used to recreate the inaccessible data.
  • the deduplicator or other computing device or logical device tracks the members of each stripe (i.e., which data chunks and which parity chunks are part of a stripe). This information may be used to aid in any recovery operation that is required to be performed on the data stored in the data cluster.
  • the data that is originally obtained in step 200 and/or the deduplicated chunks obtained in step 206 may be: (i) stored on a node in the accelerator pool for a finite period of time (e.g., until it is determined that this data is no longer required in the accelerator pool, where this determination may be made based on a policy); or (ii) stored on a node in the accelerator pool until the end of step 208 and then deleted from the accelerator pool.
  • FIGS. 3A-3C show an example of two backups obtained by the data cluster at two points in time.
  • Backup A ( 300 ) includes data that may be divided into data chunks A0 ( 302 ), A1 ( 304 ), and A2 ( 306 ).
  • the data cluster obtains a second backup ( 310 ) that includes data that may be divided into data chunks A0 ( 312 ), A1′ ( 314 ), and A2 ( 316 ).
  • Backup B is a modified version of Backup A. Accordingly, assume that the data associated with data chunk A0 ( 312 ) of backup B ( 310 ) is identical to the data associated with data chunk A0 ( 302 ) of backup A ( 300 ). Similarly, the data associated with data chunk A2 ( 316 ) of backup B ( 310 ) is identical to the data associated with data chunk A2 ( 306 ) of backup A ( 300 ). In contrast, the data associated with data chunk A1′ ( 314 ) of backup B ( 310 ) is an update of data chunk A1 ( 304 ) of backup A ( 300 ). Finally, in this example, assume that the erasure coding process includes implementing RAID 4.
  • FIG. 3B shows the data cluster after backup A ( 300 ) is processed in accordance with FIG. 2 .
  • the data cluster may include an accelerator pool ( 320 ) that performs the method of FIG. 2 to generate deduplicated backup A (322) using backup A ( 300 ).
  • the method may include dividing the backup into data chunks A0, A1, and A2, where these data chunks are associated with a first stripe. The aforementioned data chunks are then used to generate a parity chunk AP1 using RAID 4.
  • Because deduplicated backup A ( 322 ) is the first backup stored in the data cluster, all three data chunks are distributed across nodes in the non-accelerator pool ( 330 ) as deduplicated data chunks ( 322 A, 322 B, 322 C).
  • Deduplicated data chunk A0 ( 322 A) may be stored in a node A ( 332 )
  • deduplicated data chunk A1 ( 322 B) may be stored in a node B ( 334 )
  • deduplicated data chunk A2 ( 322 C) may be stored in a node C ( 336 )
  • parity chunk AP1 ( 322 D) may be stored in a node D ( 338 ).
  • Each node ( 332 , 334 , 336 , 338 ) may be a node in a unique fault domain. In this manner, each chunk ( 322 A, 322 B, 322 C, 322 D) is stored in a different fault domain.
  • The location of each deduplicated data chunk ( 322 A, 322 B, 322 C) and of the parity chunk ( 322 D) is stored in the deduplicator of the accelerator pool ( 320 ) as location information.
  • the location information may include entries that each specify a deduplicated data chunk ( 322 A, 322 B, 322 C) or the parity chunk AP1 ( 322 D) and the data node ( 332 , 334 , 336 , 338 ) storing the respective chunk.
  • Backup B ( 310 ) is later obtained by the accelerator pool ( 320 ).
  • the backup B ( 310 ) may be divided into data chunks A0, A1′, and A2, where these data chunks are associated with a second stripe that is a modified version of the first stripe.
  • the data chunks (A0, A1′, A2) may be used to generate a parity chunk AP1′.
  • the data chunks in the second stripe are then deduplicated by the deduplicator.
  • The result of the deduplication of the second stripe is that data chunks A0 and A2 already exist in the non-accelerator pool and thus are deleted from backup B.
  • the remaining chunks associated with the deduplicated backup B ( 324 ) may be stored in nodes of the non-accelerator pool ( 330 ) as deduplicated data chunks A1′ ( 324 A) and AP1′ ( 324 B).
  • the accelerator pool ( 320 ) may use the location information, which specifies the location information of deduplicated data chunks ( 322 A, 322 B, 322 C) and parity chunk ( 322 D) of deduplicated backup A ( 322 ), to determine where to store the deduplicated data chunk ( 324 A) and parity chunk ( 324 B) of deduplicated backup B ( 324 ).
  • deduplicated data chunk A1′ ( 324 A) is stored in node B ( 334 ), where deduplicated data chunk A1 ( 322 B) is stored. Subsequently, deduplicated data chunk A1 ( 322 B) may be deleted from node B ( 334 ). Similarly, parity chunk AP1′ ( 324 B) is stored in node D ( 338 ). Further, parity chunk AP1 ( 322 D) may be deleted from node D ( 338 ).
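  • The short sketch below replays this example under the same assumptions: because A0 and A2 deduplicate away, only A1′ and AP1′ are written, each to the node that holds the prior version of the corresponding chunk:

```python
# Hypothetical walkthrough of the FIG. 3A-3C example.

first_stripe = {"A0": "node A", "A1": "node B", "A2": "node C", "AP1": "node D"}

second_stripe = ["A0", "A1'", "A2", "AP1'"]  # backup B after chunking and parity calculation
prior_slots = ["A0", "A1", "A2", "AP1"]      # members of the first version of the stripe

writes = {}
for new_id, old_id in zip(second_stripe, prior_slots):
    if new_id == old_id:
        continue                           # A0 and A2 deduplicate away; nothing is written
    writes[new_id] = first_stripe[old_id]  # reuse the node that held the prior version

print(writes)  # {"A1'": 'node B', "AP1'": 'node D'}
```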
  • FIG. 4 shows a diagram of a computing device in accordance with one or more embodiments of the invention.
  • the computing device ( 400 ) may include one or more computer processors ( 402 ), non-persistent storage ( 404 ) (e.g., volatile memory, such as random access memory (RAM), cache memory), persistent storage ( 406 ) (e.g., a hard disk, an optical drive such as a compact disk (CD) drive or digital versatile disk (DVD) drive, a flash memory, etc.), a communication interface ( 412 ) (e.g., Bluetooth interface, infrared interface, network interface, optical interface, etc.), input devices ( 410 ), output devices ( 408 ), and numerous other elements (not shown) and functionalities. Each of these components is described below.
  • the computer processor(s) ( 402 ) may be an integrated circuit for processing instructions.
  • the computer processor(s) may be one or more cores or micro-cores of a processor.
  • the computing device ( 400 ) may also include one or more input devices ( 410 ), such as a touchscreen, keyboard, mouse, microphone, touchpad, electronic pen, or any other type of input device.
  • the communication interface ( 412 ) may include an integrated circuit for connecting the computing device ( 400 ) to a network (not shown) (e.g., a local area network (LAN), a wide area network (WAN) such as the Internet, mobile network, or any other type of network) and/or to another device, such as another computing device.
  • the computing device ( 400 ) may include one or more output devices ( 408 ), such as a screen (e.g., a liquid crystal display (LCD), a plasma display, touchscreen, cathode ray tube (CRT) monitor, projector, or other display device), a printer, external storage, or any other output device.
  • One or more of the output devices may be the same or different from the input device(s).
  • the input and output device(s) may be locally or remotely connected to the computer processor(s) ( 402 ), non-persistent storage ( 404 ), and persistent storage ( 406 ).
  • One or more embodiments of the invention may be implemented using instructions executed by one or more processors of the data management device. Further, such instructions may correspond to computer readable instructions that are stored on one or more non-transitory computer readable mediums.
  • One or more embodiments of the invention may improve the operation of one or more computing devices. More specifically, embodiments of the invention improve the efficiency of performing storage operations in a data cluster. The efficiency is improved by implementing erasure coding procedures and performing deduplication on data.
  • the erasure coding procedure includes generating additional portions of data associated with the data.
  • the deduplicated data and the additional portions of data may be stored across multiple fault domains. In this manner, if any number of fault domains become inaccessible prior to recovery of data, the data stored in the remaining fault domains may be used to recreate the data. This method may replace the need to store multiple copies of the same data across the fault domains, thus reducing the amount of storage used for storing data while maintaining policies in the event of fault domain failures.
  • embodiments of the invention improve the storage and recovery operations by tracking the location of each portion of data (e.g., data chunks and parity chunks) stored in the data cluster. By tracking the location, embodiments of the invention may be used to send deduplicated data chunks and/or parity chunks to appropriate data nodes.
  • embodiments of the invention may address the problem of inefficient use of computing resources. This problem arises due to the technological nature of the environment in which data storage operations are performed.

Abstract

A method for storing data includes obtaining data, applying an erasure coding procedure to the data to obtain a plurality of data chunks and a parity chunk, deduplicating the plurality of data chunks to obtain a plurality of deduplicated data chunks, storing, across a plurality of nodes, the plurality of deduplicated data chunks and the parity chunk, and tracking location information for each of the plurality of deduplicated data chunks and the parity chunk.

Description

    BACKGROUND
  • Computing devices may include any number of internal components such as processors, memory, and persistent storage. Each of the internal components of a computing device may be used to generate data. The process of generating, storing, and backing-up data may utilize computing resources of the computing devices such as processing and storage. The utilization of the aforementioned computing resources to generate backups may impact the overall performance of the computing resources.
  • SUMMARY
  • In general, in one aspect, the invention relates to a method for storing data in accordance with one or more embodiments of the invention. The method includes obtaining data, applying an erasure coding procedure to the data to obtain a plurality of data chunks and a parity chunk, deduplicating the plurality of data chunks to obtain a plurality of deduplicated data chunks, storing, across a plurality of nodes, the plurality of deduplicated data chunks and the parity chunk, and tracking location information for each of the plurality of deduplicated data chunks and the parity chunk.
  • In general, in one aspect, the invention relates to a non-transitory computer readable medium, in accordance with one or more embodiments of the invention, that includes computer readable program code, which when executed by a computer processor enables the computer processor to perform a method for storing data. The method includes obtaining data, applying an erasure coding procedure to the data to obtain a plurality of data chunks and a parity chunk, deduplicating the plurality of data chunks to obtain a plurality of deduplicated data chunks, storing, across a plurality of nodes, the plurality of deduplicated data chunks and the parity chunk, and tracking location information for each of the plurality of deduplicated data chunks and the parity chunk.
  • In general, in one aspect, the invention relates to a data cluster. The data cluster includes a plurality of data nodes comprising an accelerator pool and a non-accelerator pool, wherein the accelerator pool comprises a data node, and the non-accelerator pool comprises a plurality of data nodes; wherein the data node of the accelerator pool is programmed to: obtain data, apply an erasure coding procedure to the data to obtain a plurality of data chunks and a parity chunk, deduplicate the plurality of data chunks to obtain a plurality of deduplicated data chunks, store, across a plurality of nodes, the plurality of deduplicated data chunks and the parity chunk, and track location information for each of the plurality of deduplicated data chunks and the parity chunk.
  • BRIEF DESCRIPTION OF DRAWINGS
  • Certain embodiments of the invention will be described with reference to the accompanying drawings. However, the accompanying drawings illustrate only certain aspects or implementations of the invention by way of example and are not meant to limit the scope of the claims.
  • FIG. 1A shows a diagram of a system in accordance with one or more embodiments of the invention.
  • FIG. 1B shows a diagram of a data cluster in accordance with one or more embodiments of the invention.
  • FIG. 2 shows a flowchart for storing data in a data cluster in accordance with one or more embodiments of the invention.
  • FIGS. 3A-3C show an example in accordance with one or more embodiments of the invention.
  • FIG. 4 shows a diagram of a computing device in accordance with one or more embodiments of the invention.
  • DETAILED DESCRIPTION
  • Specific embodiments will now be described with reference to the accompanying figures. In the following description, numerous details are set forth as examples of the invention. It will be understood by those skilled in the art that one or more embodiments of the present invention may be practiced without these specific details and that numerous variations or modifications may be possible without departing from the scope of the invention. Certain details known to those of ordinary skill in the art are omitted to avoid obscuring the description.
  • In the following description of the figures, any component described with regard to a figure, in various embodiments of the invention, may be equivalent to one or more like-named components described with regard to any other figure. For brevity, descriptions of these components will not be repeated with regard to each figure. Thus, each and every embodiment of the components of each figure is incorporated by reference and assumed to be optionally present within every other figure having one or more like-named components. Additionally, in accordance with various embodiments of the invention, any description of the components of a figure is to be interpreted as an optional embodiment, which may be implemented in addition to, in conjunction with, or in place of the embodiments described with regard to a corresponding like-named component in any other figure.
  • Throughout this application, elements of figures may be labeled as A to N. As used herein, the aforementioned labeling means that the element may include any number of items and does not require that the element include the same number of elements as any other item labeled as A to N. For example, a data structure may include a first element labeled as A and a second element labeled as N. This labeling convention means that the data structure may include any number of the elements. A second data structure, also labeled as A to N, may also include any number of elements. The number of elements of the first data structure and the number of elements of the second data structure may be the same or different.
  • In general, embodiments of the invention relate to a method and system for storing data in a data cluster. Embodiments of the invention may utilize a deduplicator, operating in an accelerator pool, which applies an erasure coding procedure on data obtained from a host to divide the data into data chunks and to generate parity chunks using the data chunks. The deduplicator may then perform deduplication on the data chunks to generate deduplicated data that includes deduplicated data chunks. The deduplicated data chunks and the parity chunks are subsequently distributed to nodes in the data cluster in accordance with an erasure coding procedure.
  • In one or more embodiments of the invention, the deduplicator stores storage information that specifies the nodes in which each data chunk and parity chunk is stored. In this manner, if the accelerator pool obtains data that includes modifications to previously stored data chunks, the modified data chunks may be sent to the appropriate nodes (i.e., the nodes on which prior versions of the specific data chunk or parity chunk are stored). In this manner, embodiments of the invention minimize the number of read and write operations that are required to write erasure coded deduplicated data to the non-accelerator pool. Said another way, by tracking the node to which each data chunk and parity chunk is written in the non-accelerator pool, one or more embodiments of the invention enable only portions of a stripe (i.e., a set of data chunks and parity chunks) to be written to the non-accelerator pool when a portion of the stripe is modified. This results in fewer read and write operations being performed, as none of the prior stored data chunks need to be read from or re-written to the non-accelerator pool.
  • FIG. 1A shows an example system in accordance with one or more embodiments of the invention. The system includes a host (100) and a data cluster (110). The host (100) is operably connected to the data cluster (110) via any combination of wired and/or wireless connections.
  • In one or more embodiments of the invention, the host (100) utilizes the data cluster (110) to store data. The data stored may be backups of databases, files, applications, and/or other types of data without departing from the invention.
  • In one or more embodiments of the invention, the host (100) is implemented as a computing device (see e.g., FIG. 4). The computing device may be, for example, a laptop computer, a desktop computer, a server, a distributed computing system, or a cloud resource (e.g., a third-party storage system accessible via a wired or wireless connection). The computing device may include one or more processors, memory (e.g., random access memory), and persistent storage (e.g., disk drives, solid state drives, etc.). The computing device may include instructions, stored on the persistent storage, that when executed by the processor(s) of the computing device cause the computing device to perform the functionality of the host (100) described throughout this application.
  • In one or more embodiments of the invention, the host (100) is implemented as a logical device. The logical device may utilize the computing resources of any number of computing devices and thereby provide the functionality of the host (100) described throughout this application.
  • In one or more embodiments of the invention, the data cluster (110) stores data and/or backups of data generated by the host (100). The data and/or backups may be deduplicated versions of data obtained from the host. The data cluster may, via an erasure coding procedure, store portions of the deduplicated data across the nodes operating in the data cluster (110).
  • As used herein, deduplication refers to methods of storing only portions of files (also referred to as file segments or segments) that are not already stored in persistent storage. For example, when multiple versions of a large file, having only minimal differences between each of the versions, are stored without deduplication, storing each version will require approximately the same amount of storage space of a persistent storage. In contrast, when the multiple versions of the large file are stored with deduplication, only the first version of the multiple versions stored will require a substantial amount of storage. Once the first version is stored in the persistent storage, the subsequent versions of the large file subsequently stored will be de-duplicated before being stored in the persistent storage resulting in much less storage space of the persistent storage being required to store the subsequently stored versions when compared to the amount of storage space of the persistent storage required to store the first stored version.
  • Continuing with the discussion of FIG. 1A, the data cluster (110) may include nodes that each store any number of portions of data. The portions of data may be obtained by other nodes or obtained from the host (100). For additional details regarding the data cluster (110), see, e.g., FIG. 1B.
  • FIG. 1B shows a diagram of a data cluster (120) in accordance with one or more embodiments of the invention. The data cluster (120) may be an embodiment of the data cluster (110, FIG. 1A) discussed above. The data cluster (120) may include an accelerator pool (130) and a non-accelerator pool (150). The accelerator pool (130) may include a deduplicator(s) (132) and any number of data nodes (134, 136). Similarly, the non-accelerator pool (150) includes any number of data nodes (154, 156). The components of the data cluster (120) may be operably connected via any combination of wired and/or wireless connections. Each of the aforementioned components is discussed below.
  • In one or more embodiments of the invention, the deduplicator(s) (132) is a device that includes functionality to perform deduplication on data obtained from a host (e.g., 100, FIG. 1A). The deduplicator (132) may store information useful to perform the aforementioned functionality. The information may include deduplication identifiers (D-IDs). A D-ID is a unique identifier that identifies portions of the data (also referred to as data chunks) that are stored in the data cluster (120). The D-ID may be used to determine whether a data chunk of the obtained data is already present elsewhere in the accelerator pool (130) or the non-accelerator pool (150). The deduplicator (132) may use the information to perform the deduplication and generate deduplicated data (or a deduplicated backup). After deduplication, an erasure coding procedure may be performed on the deduplicated data in order to generate parity chunks. The deduplicator (132) may perform the deduplication and erasure coding procedure via the method illustrated in FIG. 2.
  • In one or more embodiments of the invention, the deduplicator (132) is implemented as computer instructions, e.g., computer code, stored on a persistent storage that when executed by a processor of a data node (e.g., 134, 136) of the accelerator pool (130) cause the data node to provide the aforementioned functionality of the deduplicator (132) described throughout this application and/or all, or a portion thereof, of the method illustrated in FIG. 2.
  • In one or more embodiments of the invention, the deduplicator (132) is implemented as a computing device (see e.g., FIG. 4). The computing device may be, for example, a laptop computer, a desktop computer, a server, a distributed computing system, or a cloud resource (e.g., a third-party storage system accessible via a wired or wireless connection). The computing device may include one or more processors, memory (e.g., random access memory), and persistent storage (e.g., disk drives, solid state drives, etc.). The computing device may include instructions, stored on the persistent storage, that when executed by the processor(s) of the computing device cause the computing device to perform the functionality of the deduplicator (132) described throughout this application and/or all, or a portion thereof, of the method illustrated in FIG. 2.
  • In one or more embodiments of the invention, the deduplicator (132) is implemented as a logical device. The logical device may utilize the computing resources of any number of computing devices and thereby provide the functionality of the deduplicator (132) described throughout this application and/or all, or a portion thereof, of the method illustrated in FIG. 2.
  • Continuing with the discussion of FIG. 1B, different data nodes in the cluster may include different quantities and/or types of computing resources, e.g., processors providing processing resources, memory providing memory resources, storages providing storage resources, communicators providing communications resources. Thus, the system may include a heterogeneous population of nodes.
  • The heterogeneous population of nodes may be logically divided into an accelerator pool (130), including nodes that have more computing resources (e.g., high-performance nodes (134, 136)) than other nodes, and a non-accelerator pool (150), including nodes that have fewer computing resources (e.g., low-performance nodes (154, 156)) than the nodes in the accelerator pool (130). For example, nodes of the accelerator pool (130) may include enterprise-class solid state storage resources that provide very high storage bandwidth, low latency, and high input-output operations per second (IOPS). In contrast, the nodes of the non-accelerator pool (150) may include hard disk drives that provide lower storage performance. While illustrated in FIG. 1B as being divided into two groups, the nodes may be divided into any number of groupings based on the relative performance level of each node without departing from the invention.
  • In one or more embodiments of the invention, the data nodes (134, 136, 154, 156) store data chunks and parity chunks. The data nodes (134, 136, 154, 156) may include persistent storage that may be used to store the data chunks and parity chunks. The generation of the data chunks and parity chunks is described below with respect to FIG. 2.
  • In one or more embodiments of the invention, the non-accelerator pool (150) includes any number of fault domains. In one or more embodiments of the invention, a fault domain is a logical grouping of nodes (e.g., data nodes) such that, when one node of the grouping goes offline and/or otherwise becomes inaccessible, the other nodes in the grouping are directly affected. The effect of the node going offline may include the other nodes also going offline and/or otherwise becoming inaccessible. The non-accelerator pool (150) may include multiple fault domains. In this manner, the events of one fault domain in the non-accelerator pool (150) may have no effect on other fault domains in the non-accelerator pool (150).
  • For example, two data nodes may be in a first fault domain. If one of these data nodes in the first fault domain experiences an unexpected shutdown, other nodes in the first fault domain may be affected. In contrast, a data node in a second fault domain may not be affected by the unexpected shutdown of a data node in the first fault domain. In one or more embodiments of the invention, the unexpected shutdown of one fault domain does not affect the nodes of other fault domains. In this manner, data may be replicated and stored across multiple fault domains to allow high availability of the data.
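  • A loose sketch of the fault-domain grouping described above (the node and fault-domain identifiers are illustrative, not a prescribed data structure); it simply checks that no two chunks of a stripe share a fault domain:

```python
# Hypothetical mapping of data nodes to fault domains.
fault_domain_of = {
    "NodeA": "FD1", "NodeB": "FD1",  # both nodes are affected together if FD1 goes offline
    "NodeC": "FD2",
    "NodeD": "FD3",
}


def placement_is_fault_isolated(chunk_to_node: dict[str, str]) -> bool:
    """True if every chunk of the stripe is stored in a distinct fault domain."""
    domains = [fault_domain_of[node] for node in chunk_to_node.values()]
    return len(domains) == len(set(domains))


print(placement_is_fault_isolated({"D1": "NodeA", "D2": "NodeC", "P1": "NodeD"}))  # True
print(placement_is_fault_isolated({"D1": "NodeA", "D2": "NodeB", "P1": "NodeC"}))  # False: NodeA and NodeB share FD1
```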
  • In one or more embodiments of the invention, each data node (134, 136, 154, 156) is implemented as a computing device (see e.g., FIG. 4). The computing device may be, for example, a laptop computer, a desktop computer, a server, a distributed computing system, or a cloud resource (e.g., a third-party storage system accessible via a wired or wireless connection). The computing device may include one or more processors, memory (e.g., random access memory), and persistent storage (e.g., disk drives, solid state drives, etc.). The computing device may include instructions, stored on the persistent storage, that when executed by the processor(s) of the computing device cause the computing device to perform the functionality of the data node (134, 136, 154, 156) described throughout this application and/or all, or a portion thereof, of the method illustrated in FIG. 2.
  • In one or more embodiments of the invention, the data nodes (134, 136, 154, 156) are implemented as a logical device. The logical device may utilize the computing resources of any number of computing devices and thereby provide the functionality of the data nodes (134, 136, 154, 156) described throughout this application and/or all, or a portion thereof, of the method illustrated in FIG. 2.
  • FIG. 2 shows a flowchart for storing data in a data cluster in accordance with one or more embodiments of the invention. The method shown in FIG. 2 may be performed by, for example, a deduplicator (132, FIG. 1B). Other components of the system illustrated in FIG. 1B may perform the method of FIG. 2 without departing from the invention. While the various steps in the flowchart are presented and described sequentially, one of ordinary skill in the relevant art will appreciate that some or all of the steps may be executed in different orders, may be combined or omitted, and some or all steps may be executed in parallel.
  • In step 200, data is obtained from a host. The data may be a file, a file segment, a collection of files, or any other type of data without departing from the invention. The data may be obtained in response to a request to store data and/or back up the data. Other requests may be used to initiate the method without departing from the invention.
  • In step 202, confirmation is sent to the host. In one or more embodiments of the invention, the confirmation is an acknowledgement (ACK) that confirms receipt of the data by the data cluster. At this stage, from the perspective of the host, the data has been backed up. This is the case even though the data cluster is still performing the method shown in FIG. 2.
  • In step 204, an erasure coding procedure is performed on the data to generate data chunks and parity chunks. In one or more embodiments of the invention, the erasure coding procedure includes dividing the obtained data into portions, referred to as data chunks. Each data chunk may include any number of data segments associated with the obtained data. The individual data chunks may then be combined (or otherwise grouped) into stripes (also referred to as Redundant Array of Independent Disks (RAID) stripes). One or more parity values are then calculated for each of the aforementioned stripes. The number of parity values may vary based on the erasure coding algorithm that is being used as part of the erasure coding procedure. Non-limiting examples of erasure coding algorithms are RAID-4, RAID-5, and RAID-6. Other erasure coding algorithms may be used without departing from the invention. Continuing with the above discussion, if the erasure coding procedure is implementing RAID-4, then a single parity value is calculated. The resulting parity value is then stored in a parity chunk. If the erasure coding algorithm requires multiple parity values to be calculated, then the multiple parity values are calculated, with each parity value being stored in a separate parity chunk.
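  • The chunk-and-stripe structure of step 204 might be sketched as follows, assuming a RAID-4-like single-parity layout and illustrative chunk sizes and names (a simplification, not the claimed erasure coding procedure):

```python
def chunk_data(data: bytes, chunk_size: int) -> list[bytes]:
    """Divide the obtained data into equally sized data chunks (the last chunk is zero-padded)."""
    if not data:
        data = b"\x00"
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    chunks[-1] = chunks[-1].ljust(chunk_size, b"\x00")
    return chunks


def p_parity(chunks: list[bytes]) -> bytes:
    """Compute a single RAID-4/RAID-5 style P parity chunk as the byte-wise XOR of the data chunks."""
    parity = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, byte in enumerate(chunk):
            parity[i] ^= byte
    return bytes(parity)


def erasure_code(data: bytes, k: int = 3, chunk_size: int = 4096) -> list[dict]:
    """Group the data chunks into stripes of k chunks and attach one parity chunk per stripe."""
    chunks = chunk_data(data, chunk_size)
    stripes = []
    for i in range(0, len(chunks), k):
        stripe_chunks = chunks[i:i + k]
        stripes.append({"data_chunks": stripe_chunks, "parity_chunks": [p_parity(stripe_chunks)]})
    return stripes
```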
  • As discussed above, the data chunks are used to generate parity chunks in accordance with the erasure coding procedure. More specifically, the parity chunks may be generated by applying a predetermined function (e.g., P Parity function, Q Parity Function), operation, or calculation to at least one of the data chunks. Depending on the erasure coding procedure used, the parity chunks may include, but are not limited to, P parity values and/or Q parity values.
  • In one embodiment of the invention, the P parity value is a Reed-Solomon syndrome and, as such, the P Parity function may correspond to any function that can generate a Reed-Solomon syndrome. In one embodiment of the invention, the P parity function is an XOR function.
  • In one embodiment of the invention, the Q parity value is a Reed-Solomon syndrome and, as such, the Q Parity function may correspond to any function that can generate a Reed-Solomon syndrome. In one embodiment of the invention, a Q parity value is a Reed-Solomon code. In one embodiment of the invention, Q = g^0·D_0 + g^1·D_1 + g^2·D_2 + . . . + g^(n-1)·D_(n-1), where Q corresponds to the Q parity, g is a generator of the field, and the values D_0 through D_(n-1) correspond to the data in the data chunks.
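  • A minimal sketch of the Q parity calculation above, assuming the common RAID-6 reading in which the coefficients are successive powers of a generator g of GF(2^8); the polynomial, generator, and function names are illustrative:

```python
def gf256_mul(a: int, b: int, poly: int = 0x11D) -> int:
    """Multiply two elements of GF(2^8) defined by the primitive polynomial x^8 + x^4 + x^3 + x^2 + 1."""
    product = 0
    while b:
        if b & 1:
            product ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= poly
    return product


def q_parity(data_chunks: list[bytes], g: int = 2) -> bytes:
    """Compute Q = g^0*D_0 + g^1*D_1 + ... + g^(n-1)*D_(n-1), byte-wise over GF(2^8)."""
    q = bytearray(len(data_chunks[0]))
    coefficient = 1  # g^0
    for chunk in data_chunks:
        for i, byte in enumerate(chunk):
            q[i] ^= gf256_mul(coefficient, byte)
        coefficient = gf256_mul(coefficient, g)  # advance to the next power of g
    return bytes(q)
```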
  • In one or more embodiments of the invention, the number of data chunks and parity chunks generated is determined by the erasure coding procedure, which may be specified by the host, by the data cluster, and/or by another entity.
  • In step 206, deduplication is performed on the data chunks to obtain deduplicated data chunks. In one or more embodiments of the invention, the deduplication is performed in the accelerator pool by identifying the data chunks of the obtained data and assigning a fingerprint to each data chunk. A fingerprint is a unique identifier (e.g., a D-ID) that may be stored in metadata of the data chunk. The deduplicator performing the deduplication may generate a fingerprint for a data chunk and identify whether the fingerprint matches an existing fingerprint stored in the deduplicator. If the fingerprint matches an existing fingerprint, the data chunk may be deleted, as it is already stored in the data cluster. If the fingerprint does not match any existing fingerprints, the data chunk may be stored as a deduplicated data chunk. Additionally, the fingerprint is stored in the deduplicator so that it can be used to deduplicate subsequently obtained data.
  • In one or more embodiments of the invention, the deduplicated data chunks collectively make up the deduplicated data. In one or more embodiments of the invention, the deduplicated data chunks are the data chunks that were not deleted during deduplication.
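  • The fingerprint-based deduplication of step 206 might look roughly as follows (a sketch only; the hash function and class names are illustrative, and the fingerprint set stands in for the D-IDs maintained by the deduplicator):

```python
import hashlib


class Deduplicator:
    """Tracks fingerprints (D-IDs) of chunks already stored in the data cluster."""

    def __init__(self) -> None:
        self.known_fingerprints: set[str] = set()

    def fingerprint(self, chunk: bytes) -> str:
        return hashlib.sha256(chunk).hexdigest()

    def deduplicate(self, data_chunks: list[bytes]) -> list[bytes]:
        """Return only the chunks whose fingerprints have not been seen before."""
        deduplicated_chunks = []
        for chunk in data_chunks:
            fp = self.fingerprint(chunk)
            if fp in self.known_fingerprints:
                continue  # chunk is already stored somewhere in the cluster; drop it
            self.known_fingerprints.add(fp)
            deduplicated_chunks.append(chunk)
        return deduplicated_chunks
```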
  • In step 208, the deduplicated data chunks and parity chunks are stored across data nodes in different fault domains in a non-accelerator pool. As discussed above, the deduplicated data chunks and the parity chunks are stored in a manner that minimizes reads and writes from the non-accelerator pool. In one embodiment of the invention, this minimization is achieved by storing data chunks and parity chunks, which are collectively referred to as a stripe, in the same manner as a prior version of the stripe. The deduplicator may use, as appropriate, location information for the previously stored data chunks and parity chunks to determine where to store the data chunks and parity chunks in step 208.
  • More specifically, in one embodiment of the invention, if the deduplicated data chunks and parity chunks are the first version of a stripe (as opposed to a modification to an existing/previously stored stripe), then the deduplicated data chunks and parity chunks may be stored across the nodes (each in a different fault domain) in the non-accelerator pool. The location (or in this case the specific node) in which the data chunk or parity chunk is stored is tracked by the deduplicator. This scenario does not require the deduplicator to use location information for previously stored data chunks and parity chunks.
  • However, if the deduplicated data chunks and parity chunks are the second version of a stripe (e.g., a modification to a previously stored stripe), then the deduplicated data chunks and parity chunks are stored across the nodes (each in a different fault domain) in the non-accelerator pool using prior stored location information. The location (or in this case the specific node and/or fault domain) in which the data chunk or parity chunk is stored is tracked by the deduplicator.
  • For example, consider a scenario in which the first version of the stripe includes three data chunks (D1, D2, D3) and one parity chunk (P1), and they were stored as follows: Node 1 stores D1, Node 2 stores D2, Node 3 stores D3, and Node 4 stores P1. Further, in this example, a second version of the stripe is received that includes three data chunks (D1, D2′, D3) and one newly calculated parity chunk (P1′). After deduplication, only D2′ and P1′ need to be stored. Based on the prior storage locations (also referred to as locations) of the data chunks (D1, D2, and D3) and parity chunk (P1) for the first version of the stripe, D2′ is stored on Node 2 and P1′ is stored on Node 4. By storing D2′ on Node 2 and P1′ on Node 4, the data chunks and parity chunks associated with the second version of the stripe satisfy the condition that all data chunks and parity chunks for the second version of the stripe are stored in separate fault domains. If the location information were not taken into account, then the entire stripe (i.e., D1, D2′, D3, and P1′) would need to be stored in order to guarantee that all data chunks and parity chunks for the second version of the stripe are stored in separate fault domains.
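  • The placement behavior described above might be sketched as follows (illustrative names; one node per fault domain is assumed). A first version of a stripe is spread across fresh fault domains, while chunks of a later version reuse the locations recorded for the corresponding chunks of the prior version:

```python
def place_stripe(chunks: dict[str, bytes], nodes: list[str],
                 location_info: dict[str, str]) -> dict[str, str]:
    """Return a mapping of chunk label (e.g. 'D1', 'D2', 'P1') to node.

    chunks        -- the chunks of this stripe that still need to be stored
                     (after deduplication, only new or modified chunks remain)
    nodes         -- one node per fault domain, in a fixed order
    location_info -- prior placement for this stripe, keyed by chunk position
                     (empty for the first version of a stripe)
    """
    placement = {}
    free_nodes = [n for n in nodes if n not in location_info.values()]
    for label in chunks:
        if label in location_info:
            placement[label] = location_info[label]  # reuse the prior location
        else:
            placement[label] = free_nodes.pop(0)     # first version: pick a fresh fault domain
    location_info.update(placement)                  # track locations for future versions
    return placement


# First version of the stripe: D1, D2, D3, P1 spread across four fault domains.
locations: dict[str, str] = {}
place_stripe({"D1": b"d1", "D2": b"d2", "D3": b"d3", "P1": b"p1"},
             ["Node1", "Node2", "Node3", "Node4"], locations)
# Second version: after deduplication only D2' and P1' remain; they land on the
# same nodes that held D2 and P1, so every fault domain still holds one chunk.
place_stripe({"D2": b"d2-new", "P1": b"p1-new"},
             ["Node1", "Node2", "Node3", "Node4"], locations)
```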
  • In one or more embodiments of the invention, if a data node obtains a deduplicated data chunk that is a modified version of a previously stored deduplicated data chunk, then the data node may: (i) store the modified version of the deduplicated data chunk (i.e., the data node would include two versions of the data chunk) or (ii) store the modified version of the deduplicated data chunk and delete the prior version of the deduplicated data chunk.
  • In one embodiment of the invention, the deduplicator includes functionality to determine whether a given data chunk is a modified version of a previously stored data chunk. Said another way, after the data received from a host is divided into data chunks and grouped into stripes, the deduplicator includes functionality to determine whether a stripe is a modified version of a prior stored stripe. The deduplicator may use the fingerprints of the data chunks within the stripe to determine whether the stripe is a modified version of a prior stored stripe. Other methods for determining whether a data chunk is a modified version of a prior stored data chunk and/or whether a stripe is a modified version of a prior stripe may be used without departing from the invention.
  • In step 210, location information in the deduplicator is updated using the locations of the deduplicated data chunks and parity chunks. The location (or locations) may be specified using a node identifier, a fault domain identifier (i.e., the fault domain in which the node storing the data chunk or parity chunk is located), or any other type of identifying information. The location information may be stored along with other chunk metadata, which may include, but is not limited to, a chunk type (e.g., data chunk or parity chunk), a deduplicated data chunk identifier (e.g., a D-ID) or parity chunk identifier (which may be generated for a parity chunk in the same manner as a D-ID for a data chunk), and erasure coding information (e.g., information about the erasure coding procedure, such as the erasure coding algorithm used).
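  • One possible shape for the per-chunk location information and metadata of step 210 (a sketch only; the field names are illustrative):

```python
from dataclasses import dataclass


@dataclass
class ChunkMetadata:
    chunk_id: str         # D-ID for a data chunk, or an analogous identifier for a parity chunk
    chunk_type: str       # "data" or "parity"
    node_id: str          # node currently storing the chunk
    fault_domain_id: str  # fault domain in which that node is located
    stripe_id: str        # stripe membership, useful during recovery
    erasure_coding: str   # e.g. the erasure coding algorithm used for the stripe
```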
  • As discussed above, the data chunks and parity chunks may be stored in different fault domains. Storing the data chunks and parity chunks in multiple fault domains may be for recovery purposes. In the event that one or more fault domains storing data chunks or parity chunks become inaccessible, the data chunks and/or parity chunks stored in the remaining fault domains may be used to recreate the inaccessible data. In one embodiment of the invention, as part of (or in addition to) the chunk metadata, the deduplicator (or other computing device or logical device) tracks the members of each stripe (i.e., which data chunks and which parity chunks are part of a stripe). This information may be used to aid in any recovery operation that is required to be performed on the data stored in the data cluster.
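  • To illustrate why the surviving fault domains suffice for recovery in the single-parity case (a sketch under that assumption; names are illustrative), any one missing chunk of an XOR-parity stripe is the byte-wise XOR of the remaining chunks of the stripe:

```python
def rebuild_missing_chunk(surviving_chunks: list[bytes]) -> bytes:
    """Reconstruct the single missing chunk of an XOR-parity stripe.

    surviving_chunks holds every chunk of the stripe except the lost one
    (the parity chunk counts as one of the survivors).
    """
    rebuilt = bytearray(len(surviving_chunks[0]))
    for chunk in surviving_chunks:
        for i, byte in enumerate(chunk):
            rebuilt[i] ^= byte
    return bytes(rebuilt)
```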
  • In one embodiment of the invention, the data that is originally obtained in step 200 and/or the deduplicated data chunks obtained in step 206 may be: (i) stored on a node in the accelerator pool for a finite period of time (e.g., until it is determined that this data is no longer required in the accelerator pool, where this determination may be made based on a policy); or (ii) stored on a node in the accelerator pool until the end of step 208 and then deleted from the accelerator pool.
  • EXAMPLE
  • The following section describes an example. The example is not intended to limit the invention. The example is illustrated in FIGS. 3A-3C. Turning to the example, consider a scenario in which a data cluster obtains two backups from a single host at two points in time. The host may request the backups be stored in the data cluster in a 3:1 erasure coding scheme. FIG. 3A shows a diagram of the two backups at the two points in time. Backup A (300) may be obtained at a point in time T=1. Backup A (300) includes data that may be divided into data chunks A0 (302), A1 (304), and A2 (306). At a second point in time T=2, the data cluster obtains a second backup (310) that includes data that may be divided into data chunks A0 (312), A1′ (314), and A2 (316).
  • In this example, Backup B is a modified version of Backup A. Accordingly, assume that the data associated with data chunk A0 (312) of backup B (310) is identical to the data associated with data chunk A0 (302) of backup A (300). Similarly, the data associated with data chunk A2 (316) of backup B (310) is identical to the data associated with data chunk A2 (306) of backup A (300). In contrast, the data associated with data chunk A1′ (314) of backup B (310) is an update of data chunk A1 (304) of backup A (300). Finally, in this example, assume that the erasure coding process includes implementing RAID 4.
  • FIG. 3B shows the data cluster after backup A (300) is processed in accordance with FIG. 2. The data cluster may include an accelerator pool (320) that performs the method of FIG. 2 to generate deduplicated backup A (322) using backup A (300). The method may include dividing the backup into data chunks A0, A1, and A2, where these data chunks are associated with a first stripe. The aforementioned data chunks are then used to generate a parity chunk AP1 using RAID-4.
  • Because the deduplicated backup A (322) is the first backup stored in the data cluster, all three data chunks are distributed across nodes in the non-accelerator pool (330) as deduplicated data chunks (322A, 322B, 322C). Deduplicated data chunk A0 (322A) may be stored in a node A (332), deduplicated data chunk A1 (322B) may be stored in a node B (334), deduplicated data chunk A2 (322C) may be stored in a node C (336), and parity chunk AP1 (322D) may be stored in a node D (338). Each node (332, 334, 336, 338) may be a node in a unique fault domain. In this manner, each chunk (322A, 322B, 322C, 322D) is stored in a different fault domain.
  • The location of each deduplicated data chunk (322A, 322B, 322C) and parity chunk (322D) is stored in the deduplicator of the accelerator pool (320) as location information. The location information may include entries that each specify a deduplicated data chunk (322A, 322B, 322C) or the parity chunk AP1 (322D) and the data node (332, 334, 336, 338) storing the respective chunk.
  • At the second point in time T=2, backup B (310) is obtained by the accelerator pool (320). The backup B (310) may be divided into data chunks A0, A1′, and A2, where these data chunks are associated with a second stripe that is a modified version of the first stripe. The data chunks (A0, A1′, A2) may be used to generate a parity chunk AP1′. The data chunks in the second stripe are then deduplicated by the deduplicator. The result of the deduplication of the second stripe is that data chunks A0 and A2 already exist in the non-accelerator pool and thus are deleted from backup B.
  • The remaining chunks associated with the deduplicated backup B (324) may be stored in nodes of the non-accelerator pool (330) as deduplicated data chunk A1′ (324A) and parity chunk AP1′ (324B). The accelerator pool (320) may use the location information, which specifies the locations of the deduplicated data chunks (322A, 322B, 322C) and parity chunk (322D) of deduplicated backup A (322), to determine where to store the deduplicated data chunk (324A) and parity chunk (324B) of deduplicated backup B (324).
  • Using the location information, deduplicated data chunk A1′ (324A) is stored in node B (334), where deduplicated data chunk A1 (322B) is stored. Subsequently, deduplicated data chunk A1 (322B) may be deleted from node B (334). Similarly, parity chunk AP1′ (324B) is stored in node D (338). Further, parity chunk AP1 (322D) may be deleted from node D (338).
  • End of Example
  • As discussed above, embodiments of the invention may be implemented using computing devices. FIG. 4 shows a diagram of a computing device in accordance with one or more embodiments of the invention. The computing device (400) may include one or more computer processors (402), non-persistent storage (404) (e.g., volatile memory, such as random access memory (RAM), cache memory), persistent storage (406) (e.g., a hard disk, an optical drive such as a compact disk (CD) drive or digital versatile disk (DVD) drive, a flash memory, etc.), a communication interface (412) (e.g., Bluetooth interface, infrared interface, network interface, optical interface, etc.), input devices (410), output devices (408), and numerous other elements (not shown) and functionalities. Each of these components is described below.
  • In one embodiment of the invention, the computer processor(s) (402) may be an integrated circuit for processing instructions. For example, the computer processor(s) may be one or more cores or micro-cores of a processor. The computing device (400) may also include one or more input devices (410), such as a touchscreen, keyboard, mouse, microphone, touchpad, electronic pen, or any other type of input device. Further, the communication interface (412) may include an integrated circuit for connecting the computing device (400) to a network (not shown) (e.g., a local area network (LAN), a wide area network (WAN) such as the Internet, mobile network, or any other type of network) and/or to another device, such as another computing device.
  • In one embodiment of the invention, the computing device (400) may include one or more output devices (408), such as a screen (e.g., a liquid crystal display (LCD), a plasma display, touchscreen, cathode ray tube (CRT) monitor, projector, or other display device), a printer, external storage, or any other output device. One or more of the output devices may be the same or different from the input device(s). The input and output device(s) may be locally or remotely connected to the computer processor(s) (402), non-persistent storage (404), and persistent storage (406). Many different types of computing devices exist, and the aforementioned input and output device(s) may take other forms.
  • One or more embodiments of the invention may be implemented using instructions executed by one or more processors of the data management device. Further, such instructions may correspond to computer readable instructions that are stored on one or more non-transitory computer readable mediums.
  • One or more embodiments of the invention may improve the operation of one or more computing devices. More specifically, embodiments of the invention improve the efficiency of performing storage operations in a data cluster. The efficiency is improved by implementing erasure coding procedures and performing deduplication on data. The erasure coding procedure includes generating additional portions of data associated with the data. The deduplicated data and the additional portions of data may be stored across multiple fault domains. In this manner, if any number of fault domains become inaccessible prior to recovery of data, the data stored in the remaining fault domains may be used to recreate the data. This method may replace the need to store multiple copies of the same data across the fault domains, thus reducing the amount of storage used for storing data while maintaining the ability to recover the data in the event of fault domain failures.
  • Further, embodiments of the invention improve the storage and recovery operations by tracking the location of each portion of data (e.g., data chunks and parity chunks) stored in the data cluster. By tracking the location, embodiments of the invention may be used to send deduplicated data chunks and/or parity chunks to appropriate data nodes.
  • Thus, embodiments of the invention may address the problem of inefficient use of computing resources. This problem arises due to the technological nature of the environment in which data storage operations are performed.
  • The problems discussed above should be understood as being examples of problems solved by embodiments of the invention disclosed herein and the invention should not be limited to solving the same/similar problems. The disclosed invention is broadly applicable to address a range of problems beyond those discussed herein.
  • While the invention has been described above with respect to a limited number of embodiments, those skilled in the art, having the benefit of this disclosure, will appreciate that other embodiments can be devised which do not depart from the scope of the invention as disclosed herein. Accordingly, the scope of the invention should be limited only by the attached claims.

Claims (20)

What is claimed is:
1. A method for storing data, the method comprising:
obtaining data;
applying an erasure coding procedure to the data to obtain a plurality of data chunks and a parity chunk;
deduplicating the plurality of data chunks to obtain a plurality of deduplicated data chunks;
storing, across a plurality of nodes, the plurality of deduplicated data chunks and the parity chunk; and
tracking location information for each of the plurality of deduplicated data chunks and the parity chunk.
2. The method of claim 1, further comprising:
obtaining second data;
applying the erasure coding procedure to the second data to obtain a second plurality of data chunks and a second parity chunk;
deduplicating the second plurality of data chunks to obtain a second plurality of deduplicated data chunks;
storing, across the plurality of nodes and using the location information for at least one of the plurality of deduplicated data chunks, the second plurality of deduplicated data chunks and the second parity chunk.
3. The method of claim 2,
wherein a first deduplicated data chunk of the first plurality of deduplicated data chunks is stored in a node of the plurality of nodes,
wherein a second deduplicated data chunk of the second plurality of deduplicated data chunks is a modified version of the first deduplicated data chunk, and
wherein storing, across the plurality of nodes and using the location information for at least one of the plurality of deduplicated data chunks, the second plurality of deduplicated data chunks and the second parity chunk comprises storing the second deduplicated data chunk on the node of the plurality of nodes.
4. The method of claim 3,
wherein the plurality of data chunks and the parity chunk are associated with a first stripe;
wherein the second plurality of data chunks and the second parity chunk is associated with a second stripe, wherein the second stripe is a modified version of the first stripe,
wherein storing, across the plurality of nodes and using the location information for at least one of the plurality of deduplicated data chunks, the second plurality of deduplicated data chunks and the second parity chunk further comprises storing the parity chunk and the second parity chunk on a second node of the plurality of nodes.
5. The method of claim 1, wherein the erasure coding procedure is applied by a deduplicator executing on a node in an accelerator pool, wherein the plurality of nodes is located in a non-accelerator pool, and wherein a data cluster comprises the accelerator pool and the non-accelerator pool.
6. The method of claim 1, wherein applying the erasure coding procedure comprises:
dividing the data into data chunks;
selecting, from the data chunks, the plurality of data chunks; and
generating the parity chunk using the plurality of data chunks.
7. The method of claim 1, wherein the parity chunk comprises a P parity value.
8. The method of claim 1, wherein each of the plurality of nodes is in a separate fault domain.
9. The method of claim 1, wherein deduplicating the plurality of data chunks to obtain the plurality of deduplicated data chunks is performed after a parity value for the plurality of data chunks is calculated.
10. A non-transitory computer readable medium comprising computer readable program code, which when executed by a computer processor enables the computer processor to perform a method for storing data, the method comprising:
obtaining data;
applying an erasure coding procedure to the data to obtain a plurality of data chunks and a parity chunk;
deduplicating the plurality of data chunks to obtain a plurality of deduplicated data chunks;
storing, across a plurality of nodes, the plurality of deduplicated data chunks and the parity chunk; and
tracking location information for each of the plurality of deduplicated data chunks and the parity chunk.
11. The non-transitory computer readable medium of claim 10, the method further comprising:
obtaining second data;
applying the erasure coding procedure to the second data to obtain a second plurality of data chunks and a second parity chunk;
deduplicating the second plurality of data chunks to obtain a second plurality of deduplicated data chunks;
storing, across the plurality of nodes and using the location information for at least one of the plurality of deduplicated data chunks, the second plurality of deduplicated data chunks and the second parity chunk.
12. The non-transitory computer readable medium of claim 11,
wherein a first deduplicated data chunk of the first plurality of deduplicated data chunks is stored in a node of the plurality of nodes,
wherein a second deduplicated data chunk of the second plurality of deduplicated data chunks is a modified version of the first deduplicated data chunk, and
wherein storing, across the plurality of nodes and using the location information for at least one of the plurality of deduplicated data chunks, the second plurality of deduplicated data chunks and the second parity chunk comprises storing the second deduplicated data chunk on the node of the plurality of nodes.
13. The non-transitory computer readable medium of claim 12,
wherein the plurality of data chunks and the parity chunk are associated with a first stripe;
wherein the second plurality of data chunks and the second parity chunk is associated with a second stripe, wherein the second stripe is a modified version of the first stripe,
wherein storing, across the plurality of nodes and using the location information for at least one of the plurality of deduplicated data chunks, the second plurality of deduplicated data chunks and the second parity chunk further comprises storing the parity chunk and the second parity chunk on a second node of the plurality of nodes.
14. The non-transitory computer readable medium of claim 10, wherein the erasure coding procedure is applied by a deduplicator executing on a node in an accelerator pool, wherein the plurality of nodes is located in a non-accelerator pool, and wherein a data cluster comprises the accelerator pool and the non-accelerator pool.
15. The non-transitory computer readable medium of claim 10, wherein applying the erasure coding procedure comprises:
dividing the data into data chunks;
selecting, from the data chunks, the plurality of data chunks; and
generating the parity chunk using the plurality of data chunks.
16. The non-transitory computer readable medium of claim 10, wherein the parity chunk comprises a P parity value.
17. The non-transitory computer readable medium of claim 10, wherein each of the plurality of nodes is in a separate fault domain.
18. The non-transitory computer readable medium of claim 10, wherein deduplicating the plurality of data chunks to obtain the plurality of deduplicated data chunks is performed after a parity value for the plurality of data chunks is calculated.
19. A data cluster, comprising:
a plurality of data nodes comprising an accelerator pool and a non-accelerator pool, wherein the accelerator pool comprises a data node, and the non-accelerator pool comprises a plurality of data nodes;
wherein the data node of the accelerator pool is programmed to:
obtain data;
apply an erasure coding procedure to the data to obtain a plurality of data chunks and a parity chunk;
deduplicate the plurality of data chunks to obtain a plurality of deduplicated data chunks;
store, across a plurality of nodes, the plurality of deduplicated data chunks and the parity chunk; and
track location information for each of the plurality of deduplicated data chunks and the parity chunk.
20. The data cluster of claim 19, wherein the node is further programmed to:
obtain second data;
apply the erasure coding procedure to the second data to obtain a second plurality of data chunks and a second parity chunk;
deduplicate the second plurality of data chunks to obtain a second plurality of deduplicated data chunks; and
store, across the plurality of nodes and using the location information for at least one of the plurality of deduplicated data chunks, the second plurality of deduplicated data chunks and the second parity chunk.

Family Cites Families (150)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4780809A (en) 1986-08-08 1988-10-25 Amdahl Corporation Apparatus for storing data with deferred uncorrectable error reporting
US5689678A (en) 1993-03-11 1997-11-18 Emc Corporation Distributed storage array system having a plurality of modular control units
US6098098A (en) 1997-11-14 2000-08-01 Enhanced Messaging Systems, Inc. System for managing the configuration of multiple computer devices
US6223252B1 (en) 1998-05-04 2001-04-24 International Business Machines Corporation Hot spare light weight mirror for raid system
US6636242B2 (en) 1999-08-31 2003-10-21 Accenture Llp View configurer in a presentation services patterns environment
US6516425B1 (en) 1999-10-29 2003-02-04 Hewlett-Packard Co. Raid rebuild using most vulnerable data redundancy scheme first
US20010044879A1 (en) 2000-02-18 2001-11-22 Moulton Gregory Hagan System and method for distributed management of data storage
US7136882B2 (en) 2001-07-31 2006-11-14 Hewlett-Packard Development Company, L.P. Storage device manager
US6978398B2 (en) 2001-08-15 2005-12-20 International Business Machines Corporation Method and system for proactively reducing the outage time of a computer system
US9087319B2 (en) 2002-03-11 2015-07-21 Oracle America, Inc. System and method for designing, developing and implementing internet service provider architectures
US7475126B2 (en) 2002-03-15 2009-01-06 Nortel Networks Limited Method and apparatus for system lineup and testing
US20040153844A1 (en) 2002-10-28 2004-08-05 Gautam Ghose Failure analysis method and system for storage area networks
US7159150B2 (en) 2002-12-31 2007-01-02 International Business Machines Corporation Distributed storage system capable of restoring data in case of a storage failure
US7434097B2 (en) 2003-06-05 2008-10-07 Copan System, Inc. Method and apparatus for efficient fault-tolerant disk drive replacement in raid storage systems
US7302436B2 (en) 2003-10-02 2007-11-27 Bayerische Motoren Werke Aktiengesellschaft Business workflow database and user system
JP2005122338A (en) 2003-10-15 2005-05-12 Hitachi Ltd Disk array device having spare disk drive, and data sparing method
US7440966B2 (en) 2004-02-12 2008-10-21 International Business Machines Corporation Method and apparatus for file system snapshot persistence
US7409582B2 (en) 2004-05-06 2008-08-05 International Business Machines Corporation Low cost raid with seamless disk failure recovery
US7313721B2 (en) 2004-06-21 2007-12-25 Dot Hill Systems Corporation Apparatus and method for performing a preemptive reconstruct of a fault-tolerant RAID array
US8849767B1 (en) 2005-04-13 2014-09-30 Netapp, Inc. Method and apparatus for identifying and eliminating duplicate data blocks and sharing data blocks in a storage system
US7636814B1 (en) 2005-04-28 2009-12-22 Symantec Operating Corporation System and method for asynchronous reads of old data blocks updated through a write-back cache
US7584338B1 (en) 2005-09-27 2009-09-01 Data Domain, Inc. Replication of deduplicated storage system
US9996413B2 (en) 2007-10-09 2018-06-12 International Business Machines Corporation Ensuring data integrity on a dispersed storage grid
US7721157B2 (en) 2006-03-08 2010-05-18 Omneon Video Networks Multi-node computer system component proactive monitoring and proactive repair
US9455955B2 (en) 2006-05-17 2016-09-27 Richard Fetik Customizable storage controller with integrated F+ storage firewall protection
US10180809B2 (en) 2006-05-17 2019-01-15 Richard Fetik Secure application acceleration system, methods and apparatus
US8086698B2 (en) 2006-06-02 2011-12-27 Google Inc. Synchronizing configuration information among multiple clients
US8520850B2 (en) 2006-10-20 2013-08-27 Time Warner Cable Enterprises Llc Downloadable security and protection methods and apparatus
US7769971B2 (en) 2007-03-29 2010-08-03 Data Center Technologies Replication and restoration of single-instance storage pools
US8838760B2 (en) 2007-09-14 2014-09-16 Ricoh Co., Ltd. Workflow-enabled provider
US8190835B1 (en) 2007-12-31 2012-05-29 Emc Corporation Global de-duplication in shared architectures
US7987353B2 (en) 2008-01-09 2011-07-26 International Business Machines Corporation Remote BIOS for servers and blades
US8234444B2 (en) 2008-03-11 2012-07-31 International Business Machines Corporation Apparatus and method to select a deduplication protocol for a data storage library
US7882386B1 (en) 2008-03-21 2011-02-01 Emc Corporaton System and method for recovering a logical volume during failover or reboot of a file server in a data storage environment
US8019728B2 (en) 2008-04-17 2011-09-13 Nec Laboratories America, Inc. Dynamically quantifying and improving the reliability of distributed data storage systems
US8788466B2 (en) 2008-08-05 2014-07-22 International Business Machines Corporation Efficient transfer of deduplicated data
US8099571B1 (en) 2008-08-06 2012-01-17 Netapp, Inc. Logical block replication with deduplication
US20100061207A1 (en) 2008-09-09 2010-03-11 Seagate Technology Llc Data storage device including self-test features
JP5396836B2 (en) 2008-12-01 2014-01-22 富士通株式会社 Data distribution control program, storage management program, control node, and disk node
US8161255B2 (en) 2009-01-06 2012-04-17 International Business Machines Corporation Optimized simultaneous storing of data into deduplicated and non-deduplicated storage pools
US8386930B2 (en) 2009-06-05 2013-02-26 International Business Machines Corporation Contextual data center management utilizing a virtual environment
US8489966B2 (en) 2010-01-08 2013-07-16 Ocz Technology Group Inc. Solid-state mass storage device and method for failure anticipation
US8762338B2 (en) 2009-10-07 2014-06-24 Symantec Corporation Analyzing backup objects maintained by a de-duplication storage system
US8321648B2 (en) 2009-10-26 2012-11-27 Netapp, Inc Use of similarity hash to route data for improved deduplication in a storage server cluster
US8868987B2 (en) 2010-02-05 2014-10-21 Tripwire, Inc. Systems and methods for visual correlation of log events, configuration changes and conditions producing alerts in a virtual infrastructure
US8880843B2 (en) 2010-02-10 2014-11-04 International Business Machines Corporation Providing redundancy in a virtualized storage system for a computer system
US8037345B1 (en) 2010-03-31 2011-10-11 Emc Corporation Deterministic recovery of a file system built on a thinly provisioned logical volume having redundant metadata
US9015268B2 (en) 2010-04-02 2015-04-21 Intel Corporation Remote direct storage access
US8898114B1 (en) 2010-08-27 2014-11-25 Dell Software Inc. Multitier deduplication systems and methods
US8417989B2 (en) 2010-10-15 2013-04-09 Lsi Corporation Method and system for extra redundancy in a raid system
US9278481B2 (en) 2010-10-26 2016-03-08 Rinco Ultrasononics USA, INC. Sonotrode and anvil energy director grids for narrow/complex ultrasonic welds of improved durability
US9183219B1 (en) 2011-04-18 2015-11-10 American Megatrends, Inc. Data migration between multiple tiers in a storage system using policy based ILM for QOS
US8874892B1 (en) 2011-05-26 2014-10-28 Phoenix Technologies Ltd. Assessing BIOS information prior to reversion
US8930307B2 (en) 2011-09-30 2015-01-06 Pure Storage, Inc. Method for removing duplicate data from a storage array
US8510807B1 (en) 2011-08-16 2013-08-13 Edgecast Networks, Inc. Real-time granular statistical reporting for distributed platforms
JP5768587B2 (en) 2011-08-17 2015-08-26 富士通株式会社 Storage system, storage control device, and storage control method
US20130067459A1 (en) 2011-09-09 2013-03-14 Microsoft Corporation Order-Independent Deployment Collections with Dependency Package Identifiers
US9256381B1 (en) 2011-09-29 2016-02-09 Emc Corporation Managing degraded storage elements in data storage systems
US8949208B1 (en) 2011-09-30 2015-02-03 Emc Corporation System and method for bulk data movement between storage tiers
US8886781B2 (en) 2011-12-13 2014-11-11 Microsoft Corporation Load balancing in cluster storage systems
US8799746B2 (en) 2012-06-13 2014-08-05 Caringo, Inc. Erasure coding and replication in storage clusters
US9164840B2 (en) 2012-07-26 2015-10-20 Lenovo Enterprise Solutions (Singapore) Pte. Ltd. Managing a solid state drive (‘SSD’) in a redundant array of inexpensive drives (‘RAID’)
US9830111B1 (en) 2012-08-08 2017-11-28 Amazon Technologies, Inc. Data storage space management
GB2505185A (en) 2012-08-21 2014-02-26 Ibm Creating a backup image of a first memory space in a second memory space.
US9898224B1 (en) 2012-09-12 2018-02-20 EMC IP Holding Company LLC Automatic adjustment of capacity usage by data storage optimizer for data migration
US9355036B2 (en) 2012-09-18 2016-05-31 Netapp, Inc. System and method for operating a system to cache a networked file system utilizing tiered storage and customizable eviction policies based on priority and tiers
US9348758B2 (en) 2012-09-24 2016-05-24 Sk Hynix Memory Solutions Inc. Virtual addressing with multiple lookup tables and RAID stripes
US10509776B2 (en) 2012-09-24 2019-12-17 Sandisk Technologies Llc Time sequence data management
EP2901284A4 (en) 2012-09-28 2016-06-01 Longsand Ltd Predicting failure of a storage device
US9660874B2 (en) 2012-12-13 2017-05-23 Level 3 Communications, Llc Devices and methods supporting content delivery with delivery services having dynamically configurable log information
US9448927B1 (en) 2012-12-19 2016-09-20 Springpath, Inc. System and methods for removing obsolete data in a distributed system of hybrid storage and compute nodes
US9696939B1 (en) 2013-03-14 2017-07-04 EMC IP Holding Company LLC Replicating data using deduplication-based arrays using network-based replication
US8902532B2 (en) 2013-03-20 2014-12-02 International Business Machines Corporation Write avoidance areas around bad blocks on a hard disk drive platter
KR20140117994A (en) 2013-03-27 2014-10-08 한국전자통신연구원 Method and apparatus for deduplication of replicated file
US9411523B2 (en) 2013-07-03 2016-08-09 Globalfoundries Inc. Redundant array of independent disks (RAID) system backup management
US20150046756A1 (en) 2013-08-08 2015-02-12 Lsi Corporation Predictive failure analysis to trigger rebuild of a drive in a raid array
US9582194B2 (en) 2013-09-27 2017-02-28 Veritas Technologies Llc Techniques for improving performance of a backup system
US9454434B2 (en) 2014-01-17 2016-09-27 Netapp, Inc. File system driven raid rebuild technique
WO2015114643A1 (en) 2014-01-30 2015-08-06 Hewlett-Packard Development Company, L.P. Data storage system rebuild
US9552261B2 (en) 2014-01-31 2017-01-24 International Business Machines Corporation Recovering data from microslices in a dispersed storage network
US20150227601A1 (en) 2014-02-13 2015-08-13 Actifio, Inc. Virtual data backup
US10339455B1 (en) 2014-03-24 2019-07-02 EMC IP Holding Company LLC Techniques for determining workload skew
US10528429B1 (en) 2014-03-31 2020-01-07 EMC IP Holding Company LLC Managing recovery of file systems
US9514012B2 (en) 2014-04-03 2016-12-06 International Business Machines Corporation Tertiary storage unit management in bidirectional data copying
US10002048B2 (en) 2014-05-15 2018-06-19 International Business Machines Corporation Point-in-time snap copy management in a deduplication environment
US20150356305A1 (en) 2014-06-05 2015-12-10 Cleversafe, Inc. Secure data access in a dispersed storage network
US10044795B2 (en) 2014-07-11 2018-08-07 Vmware Inc. Methods and apparatus for rack deployments for virtual computing environments
US9674043B2 (en) 2014-07-14 2017-06-06 Schneider Electric It Corporation Systems and methods for automatically clustering devices
US20160062832A1 (en) 2014-09-02 2016-03-03 Netapp. Inc. Wide spreading data storage architecture
US10380026B2 (en) 2014-09-04 2019-08-13 Sandisk Technologies Llc Generalized storage virtualization interface
US9122501B1 (en) 2014-09-08 2015-09-01 Quanta Computer Inc. System and method for managing multiple bios default configurations
US9678839B2 (en) 2014-09-12 2017-06-13 Microsoft Technology Licensing, Llc Scalable data storage pools
WO2016051512A1 (en) 2014-09-30 2016-04-07 株式会社日立製作所 Distributed storage system
US10257040B1 (en) 2014-11-10 2019-04-09 Amazon Technologies, Inc. Resource configuration history service
US10296219B2 (en) 2015-05-28 2019-05-21 Vmware, Inc. Data deduplication in a block-based storage system
US10597253B2 (en) 2015-07-28 2020-03-24 Janice Voncille Wilcoxon Automated aisle runner
US10091295B1 (en) 2015-09-23 2018-10-02 EMC IP Holding Company LLC Converged infrastructure implemented with distributed compute elements
US10013325B1 (en) 2015-09-29 2018-07-03 EMC IP Holding Company LLC Providing resiliency to a raid group of storage devices
US10013323B1 (en) 2015-09-29 2018-07-03 EMC IP Holding Company LLC Providing resiliency to a raid group of storage devices
US9823876B2 (en) 2015-09-29 2017-11-21 Seagate Technology Llc Nondisruptive device replacement using progressive background copyback operation
US10341185B2 (en) 2015-10-02 2019-07-02 Arista Networks, Inc. Dynamic service insertion
US9710367B1 (en) 2015-10-30 2017-07-18 EMC IP Holding Company LLC Method and system for dynamic test case creation and documentation to the test repository through automation
WO2017072933A1 (en) 2015-10-30 2017-05-04 株式会社日立製作所 Management system and management method for computer system
US9880903B2 (en) 2015-11-22 2018-01-30 International Business Machines Corporation Intelligent stress testing and raid rebuild to prevent data loss
US9513968B1 (en) 2015-12-04 2016-12-06 International Business Machines Corporation Dynamic resource allocation based on data transferring to a tiered storage
US20170192868A1 (en) 2015-12-30 2017-07-06 Commvault Systems, Inc. User interface for identifying a location of a failed secondary storage device
US20170192688A1 (en) 2015-12-30 2017-07-06 International Business Machines Corporation Lazy deletion of vaults in packed slice storage (pss) and zone slice storage (zss)
US9910748B2 (en) 2015-12-31 2018-03-06 Futurewei Technologies, Inc. Rebuilding process for storage array
CA2957584A1 (en) 2016-02-12 2017-08-12 Coho Data, Inc. Methods, systems, and devices for adaptive data resource assignment and placement in distributed data storage systems
US9749480B1 (en) 2016-03-31 2017-08-29 Kyocera Document Solutions Inc. Method that performs from scanning to storing scan data using scan cloud ticket
US10650007B2 (en) 2016-04-25 2020-05-12 Microsoft Technology Licensing, Llc Ranking contextual metadata to generate relevant data insights
US11112990B1 (en) 2016-04-27 2021-09-07 Pure Storage, Inc. Managing storage device evacuation
US10503413B1 (en) 2016-06-01 2019-12-10 EMC IP Holding Company LLC Methods and apparatus for SAN having local server storage including SSD block-based storage
US10102067B2 (en) 2016-07-14 2018-10-16 International Business Machines Corporation Performing a desired manipulation of an encoded data slice based on a metadata restriction and a storage operational condition
US10698780B2 (en) 2016-08-05 2020-06-30 Nutanix, Inc. Implementing availability domain aware replication policies
US10409778B1 (en) 2016-08-19 2019-09-10 EMC IP Holding Company LLC Data services for software defined storage system
US11625738B2 (en) 2016-08-28 2023-04-11 Vmware, Inc. Methods and systems that generated resource-provision bids in an automated resource-exchange system
US10452301B1 (en) 2016-09-29 2019-10-22 Amazon Technologies, Inc. Cluster-based storage device buffering
US11150950B2 (en) 2016-12-01 2021-10-19 Vmware, Inc. Methods and apparatus to manage workload domains in virtual server racks
US10567009B2 (en) 2016-12-06 2020-02-18 Nutanix, Inc. Dynamic erasure coding
US10241877B2 (en) 2016-12-12 2019-03-26 International Business Machines Corporation Data storage system employing a hot spare to proactively store array data in absence of a failure or pre-failure event
US10503611B1 (en) 2016-12-23 2019-12-10 EMC IP Holding Company LLC Data protection management for distributed storage
US10613935B2 (en) 2017-01-31 2020-04-07 Acronis International Gmbh System and method for supporting integrity of data storage with erasure coding
US20180260123A1 (en) 2017-03-07 2018-09-13 Lenovo Enterprise Solutions (Singapore) Pte. Ltd. SEPARATION OF DATA STORAGE MANAGEMENT ON STORAGE devices FROM LOCAL CONNECTIONS OF STORAGE DEVICES
US10768849B2 (en) 2017-03-29 2020-09-08 Amazon Technologies, Inc. Migration of information via storage devices
US10705911B2 (en) 2017-04-24 2020-07-07 Hewlett Packard Enterprise Development Lp Storing data in a distributed storage system
US10152254B1 (en) 2017-04-28 2018-12-11 EMC IP Holding Company LLC Distributing mapped raid disk extents when proactively copying from an EOL disk
US10769035B2 (en) 2017-04-28 2020-09-08 International Business Machines Corporation Key-value index recovery by log feed caching
US10949903B2 (en) 2017-05-05 2021-03-16 Servicenow, Inc. System, computer-readable medium, and method for blueprint-based cloud management
US10802727B2 (en) 2017-06-07 2020-10-13 ScaleFlux, Inc. Solid-state storage power failure protection using distributed metadata checkpointing
US10387673B2 (en) 2017-06-30 2019-08-20 Microsoft Technology Licensing, Llc Fully managed account level blob data encryption in a distributed storage environment
US10341841B2 (en) 2017-10-02 2019-07-02 Servicenow, Inc. Operation of device and application discovery for a managed network
US11334438B2 (en) 2017-10-10 2022-05-17 Rubrik, Inc. Incremental file system backup using a pseudo-virtual disk
US10817392B1 (en) 2017-11-01 2020-10-27 Pure Storage, Inc. Ensuring resiliency to storage device failures in a storage system that includes a plurality of storage devices
US20190050263A1 (en) 2018-03-05 2019-02-14 Intel Corporation Technologies for scheduling acceleration of functions in a pool of accelerator devices
JP6802209B2 (en) * 2018-03-27 2020-12-16 Hitachi, Ltd. Storage system
CN110413216B (en) 2018-04-28 2023-07-18 EMC IP Holding Company LLC Method, apparatus and computer program product for managing a storage system
JP2019204278A (en) 2018-05-23 2019-11-28 富士通株式会社 Information processing system, information processing device, and program
US11405289B2 (en) 2018-06-06 2022-08-02 Gigamon Inc. Distributed packet deduplication
CN110737393B (en) 2018-07-20 2023-08-11 EMC IP Holding Company LLC Data reading method, apparatus and computer program product
US11132256B2 (en) 2018-08-03 2021-09-28 Western Digital Technologies, Inc. RAID storage system with logical data group rebuild
US11288250B2 (en) 2018-08-09 2022-03-29 Servicenow, Inc. Partial discovery of cloud-based resources
US11099934B2 (en) 2018-08-24 2021-08-24 International Business Machines Corporation Data rebuilding
US10956454B2 (en) 2018-09-25 2021-03-23 Microsoft Technology Licensing, Llc Probabilistically generated identity database system and method
CN111104244B (en) 2018-10-29 2023-08-29 EMC IP Holding Company LLC Method and apparatus for reconstructing data in a storage array set
US20200201837A1 (en) 2018-12-21 2020-06-25 Salesforce.Com, Inc. Live record invalidation
US10929256B2 (en) 2019-01-23 2021-02-23 EMC IP Holding Company LLC Proactive disk recovery of storage media for a data storage system
US10990480B1 (en) 2019-04-05 2021-04-27 Pure Storage, Inc. Performance of RAID rebuild operations by a storage group controller of a storage system
US10963345B2 (en) 2019-07-31 2021-03-30 Dell Products L.P. Method and system for a proactive health check and reconstruction of data
US11005468B1 (en) 2020-09-09 2021-05-11 Faraday Technology Corp. Duty-cycle correction circuit for DDR devices

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150058582A1 (en) * 2013-08-23 2015-02-26 International Business Machines Corporation System and method for controlling a redundancy parity encoding amount based on deduplication indications of activity
US20160246537A1 (en) * 2013-09-27 2016-08-25 Inha-Industry Partnership Institute Deduplication of parity data in ssd based raid systems
US20150161000A1 (en) * 2013-12-10 2015-06-11 Snu R&Db Foundation Nonvolatile memory device, distributed disk controller, and deduplication method thereof
US20190197023A1 (en) * 2014-06-17 2019-06-27 International Business Machines Corporation Placement of data fragments generated by an erasure code in distributed computational devices based on a deduplication factor
US20160085630A1 (en) * 2014-09-22 2016-03-24 Storagecraft Technology Corporation Hash collision recovery in a deduplication vault
US20180018235A1 (en) * 2016-07-15 2018-01-18 Quantum Corporation Joint de-duplication-erasure coded distributed storage

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11068345B2 (en) * 2019-09-30 2021-07-20 Dell Products L.P. Method and system for erasure coded data placement in a linked node system
US11360949B2 (en) 2019-09-30 2022-06-14 Dell Products L.P. Method and system for efficient updating of data in a linked node system
US11422741B2 (en) 2019-09-30 2022-08-23 Dell Products L.P. Method and system for data placement of a linked node system using replica paths
US11481293B2 (en) 2019-09-30 2022-10-25 Dell Products L.P. Method and system for replica placement in a linked node system
US11604771B2 (en) 2019-09-30 2023-03-14 Dell Products L.P. Method and system for data placement in a linked node system
US10998919B2 (en) * 2019-10-02 2021-05-04 Microsoft Technology Licensing, Llc Coded stream processing
WO2024001126A1 (en) * 2022-06-28 2024-01-04 Suzhou Metabrain Intelligent Technology Co., Ltd. Erasure code fusion method and system, electronic device and nonvolatile readable storage medium

Also Published As

Publication number Publication date
US11281389B2 (en) 2022-03-22
US20210072912A1 (en) 2021-03-11

Similar Documents

Publication Publication Date Title
US11281389B2 (en) Method and system for inline deduplication using erasure coding
US10963345B2 (en) Method and system for a proactive health check and reconstruction of data
US8612699B2 (en) Deduplication in a hybrid storage environment
US11736447B2 (en) Method and system for optimizing access to data nodes of a data cluster using a data access gateway and metadata mapping based bidding in an accelerator pool
US11609820B2 (en) Method and system for redundant distribution and reconstruction of storage metadata
US11775193B2 (en) System and method for indirect data classification in a storage system operations
US10852989B1 (en) Method and system for offloading a continuous health-check and reconstruction of data in a data cluster
US11526284B2 (en) Method and system for storing data in a multiple data cluster system
US11442642B2 (en) Method and system for inline deduplication using erasure coding to minimize read and write operations
US11403182B2 (en) Method and system for any-point in time recovery within traditional storage system via a continuous data protection interceptor
US20210279137A1 (en) Method and system for managing a spare persistent storage device and a spare node in a multi-node data cluster
US11281535B2 (en) Method and system for performing a checkpoint zone operation for a spare persistent storage
US10977136B2 (en) Method and system for offloading a continuous health-check and reconstruction of data using compute acceleration devices on persistent storage devices
US11416357B2 (en) Method and system for managing a spare fault domain in a multi-fault domain data cluster
US11175842B2 (en) Method and system for performing data deduplication in a data pipeline
US20210278977A1 (en) Method and system for performing data deduplication and compression in a data cluster
US11372730B2 (en) Method and system for offloading a continuous health-check and reconstruction of data in a non-accelerator pool
US20220027080A1 (en) Method and system for a sequence aware data ingest and a sequence aware replication between data clusters
US11328071B2 (en) Method and system for identifying actor of a fraudulent action during legal hold and litigation
US20210034472A1 (en) Method and system for any-point-in-time recovery within a continuous data protection software-defined storage
US11895093B2 (en) Method and system for optimizing access to data nodes of a data cluster using a data access gateway
US11936624B2 (en) Method and system for optimizing access to data nodes of a data cluster using a data access gateway and bidding counters
US11882098B2 (en) Method and system for optimizing access to data nodes of a data cluster using a data access gateway and metadata mapping based bidding
US11288005B2 (en) Method and system for generating compliance and sequence aware replication in a multiple data cluster system
US11379315B2 (en) System and method for a backup data verification for a file system based backup

Legal Events

Date Code Title Description
AS Assignment

Owner name: DELL PRODUCTS L.P., MASSACHUSETTS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PATEL, DHARMESH M.;ALI, RIZWAN;CHAGANTI, RAVIKANTH;SIGNING DATES FROM 20190125 TO 20190126;REEL/FRAME:048183/0835

AS Assignment

Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., TEXAS

Free format text: SECURITY AGREEMENT;ASSIGNORS:CREDANT TECHNOLOGIES, INC.;DELL INTERNATIONAL L.L.C.;DELL MARKETING L.P.;AND OTHERS;REEL/FRAME:049452/0223

Effective date: 20190320

AS Assignment

Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., TEXAS

Free format text: SECURITY AGREEMENT;ASSIGNORS:CREDANT TECHNOLOGIES INC.;DELL INTERNATIONAL L.L.C.;DELL MARKETING L.P.;AND OTHERS;REEL/FRAME:053546/0001

Effective date: 20200409

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

AS Assignment

Owner name: DELL PRODUCTS L.P., MASSACHUSETTS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PATEL, DHARMESH M.;ALI, RIZWAN;CHAGANTI, RAVIKANTH;SIGNING DATES FROM 20190125 TO 20190126;REEL/FRAME:054794/0124

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION