CN116700605A - System and method for heterogeneous storage systems - Google Patents

System and method for heterogeneous storage systems

Info

Publication number
CN116700605A
Authority
CN
China
Prior art keywords
file
storage device
storage
data structure
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310200880.6A
Other languages
Chinese (zh)
Inventor
S. Kannan
Y. Ren
R. Pitchumani
D. Domingo
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Rutgers State University of New Jersey
Original Assignee
Samsung Electronics Co Ltd
Rutgers State University of New Jersey
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 17/900,830 (US 11,928,336 B2)
Application filed by Samsung Electronics Co Ltd, Rutgers State University of New Jersey filed Critical Samsung Electronics Co Ltd
Publication of CN116700605A publication Critical patent/CN116700605A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/061Improving I/O performance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0638Organizing or formatting or addressing of data
    • G06F3/0643Management of files
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0638Organizing or formatting or addressing of data
    • G06F3/0644Management of space entities, e.g. partitions, extents, pools
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671In-line storage system
    • G06F3/0673Single storage device
    • G06F3/0679Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Systems and methods for managing a storage system are disclosed. The storage system includes a first storage device and a second storage device different from the first storage device. A first storage operation for a first portion of a file is received and a data structure associated with the file is identified. Based on the data structure, the first storage device is identified as being for a first portion of the file. The first storage operation is sent to the first storage device. In response to the first storage operation, the first storage device updates or accesses the first portion of the file.

Description

System and method for heterogeneous storage systems
Cross reference to related applications
The present application claims priority to and the benefit of U.S. provisional application No. 63/316,403, entitled "SYSTEMS, METHODS, AND DEVICES FOR HETEROGENEOUS STORAGE SYSTEMS," filed on March 3, 2022, U.S. provisional application No. 63/350,818, entitled "UNIFIED I/O LIBRARY FOR HETEROGENEOUS STORAGE SYSTEMS," filed on June 9, 2022, and U.S. provisional application No. 63/355,377, entitled "UNIFYING HETEROGENEOUS STORAGE SYSTEMS WITH HETEROGENEOUS I/O," filed on June 24, 2022, each of which is incorporated herein by reference in its entirety.
Technical Field
One or more aspects in accordance with embodiments of the present disclosure relate to storage systems, and more particularly to management of heterogeneous storage systems.
Background
Big data applications may generate large volumes of data that may need to be accessed and/or processed quickly. The growth of big data applications may present challenges to traditional storage systems.
The above information disclosed in this background section is only for enhancement of understanding of the background of the disclosure and, therefore, it may contain information that does not constitute prior art.
Disclosure of Invention
Embodiments of the present disclosure relate to a method for managing a storage system including a first storage device and a second storage device different from the first storage device. The method comprises the following steps: receiving a first storage operation for a first portion of a file; identifying a data structure associated with the file; identifying a first storage device for a first portion of a file based on a data structure; and transmitting the first storage operation to the first storage device, wherein the first storage device updates or accesses the first portion of the file in response to the first storage operation.
According to some embodiments, the data structure comprises a tree data structure having a first node and a second node, wherein the first node comprises first information about a first portion of the file and the second node comprises second information about a second portion of the file, wherein the second portion of the file is stored in the second storage device.
According to some embodiments, the first information identifies first metadata in a first file system and the second information identifies second metadata in a second file system different from the first file system.
According to some embodiments, the file is a logical file in a virtual namespace, wherein the logical file abstracts the first file system and the second file system from the application.
According to some embodiments, the first storage operation is directed to a virtual namespace, and the method comprises: receiving a second storage operation directed to the virtual namespace; identifying a second storage device; and transmitting the second storage operation to the second storage device.
According to some embodiments, the method further comprises: receiving a second storage operation for a second portion of the file; identifying a data structure associated with the file; identifying a second storage device for a second portion of the file based on the data structure; and transmitting the second storage operation to the second storage device, wherein the second storage device updates the second portion of the file concurrently with the first storage device updating the first portion of the file in response to the second storage operation.
According to some embodiments, the method further comprises: identifying a first processing thread assigned to a first storage device; determining a throughput of a first processing thread; and reassigning the first processing thread to the second storage device based on the throughput.
According to some embodiments, the method further comprises: calculating the utilization rate of the processing resources by the first processing thread; determining availability of processing resources in a first processing group; and borrowing processing resources from the second processing group in response to the determination.
According to some embodiments, the first portion of the file comprises a first data block of the file and the second portion of the file comprises a second data block of the file.
According to some embodiments, the first storage device is a non-volatile memory device and the second storage device is a solid state drive.
Embodiments of the present disclosure also relate to a system for managing a storage system that includes a first storage device and a second storage device that is different from the first storage device. The system includes a processor and a memory. The memory stores instructions that, when executed by the processor, cause the processor to: receiving a first storage operation for a first portion of a file; identifying a data structure associated with the file; identifying a first storage device for a first portion of a file based on a data structure; and transmitting the first storage operation to the first storage device, wherein the first storage device updates or accesses the first portion of the file in response to the first storage operation.
These and other features, aspects, and advantages of the embodiments of the present disclosure will become more fully understood when considered in connection with the following detailed description, appended claims, and accompanying drawings. The actual scope of the invention is, of course, defined in the appended claims.
Drawings
Non-limiting and non-exhaustive embodiments of the present invention are described with reference to the following figures, wherein like reference numerals refer to like parts throughout the various views unless otherwise specified.
FIG. 1 is a block diagram of a computing system including a host computing device coupled to a heterogeneous storage system, according to one embodiment;
FIG. 2 is a block diagram of a heterogeneous storage manager according to one embodiment;
FIG. 3 is a conceptual diagram of a data structure generated by a data structure manager of a logical file according to one embodiment;
FIG. 4 is a flow diagram for processing a store operation, according to one embodiment;
FIG. 5 is a flowchart of a process for updating the data structure of FIG. 3 based on I/O requests associated with one or more data blocks, according to one embodiment;
FIG. 6 is a flow diagram of a process for dynamic I/O placement, according to one embodiment; and
FIG. 7 is a flowchart of a process for dynamically allocating CPU resources to I/O threads, according to one embodiment.
Detailed Description
Hereinafter, example embodiments will be described in more detail with reference to the drawings, wherein like reference numerals denote like elements throughout. This disclosure may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided as examples so that this disclosure will be thorough and complete, and will fully convey the aspects and features of the disclosure to those skilled in the art. Thus, processes, elements, and techniques not necessary for a complete understanding of aspects and features of the present disclosure by those of ordinary skill in the art may not be described. Unless otherwise indicated, like reference numerals designate like elements throughout the drawings and written description, and thus, the description thereof may not be repeated. In addition, in the drawings, the relative sizes of elements, layers and regions may be exaggerated and/or simplified for clarity.
Large-scale systems may require fast access and processing of large amounts of data. To address this need, input and/or output (I/O) hardware storage stacks may be implemented in a heterogeneous manner. A heterogeneous storage system may include different heterogeneous storage devices, such as, for example, fast but more expensive devices, such as persistent memory (PM), and slower but higher-capacity devices, such as Non-Volatile Memory Express (NVMe) Solid State Drives (SSDs) or mechanical drives.
Some prior art systems may manage storage heterogeneity by employing techniques such as caching or tiering. In a caching scheme, a faster storage device (e.g., PM) may be used as the cache and a slower storage device (e.g., an SSD or hard disk) may be used as the backing store. In a tiering approach, the use of the data is evaluated to determine the placement of the data across the different storage devices. For example, the data may be evaluated such that active or frequently accessed data (also referred to as "hot" data) is placed in faster storage and inactive or less frequently accessed data (also referred to as "cold" data) is placed in slower storage.
While prior art systems are useful in managing heterogeneous storage devices, they have drawbacks in maximizing I/O performance, reducing I/O amplification, and the like. For example, current technology solutions may be limited to using a single storage device (e.g., the faster storage) along the critical path, and fail to take advantage of the cumulative bandwidth provided by the other storage devices. For example, in a caching solution, all updates may occur on the fast storage. In a tiering solution, hot and cold objects (or files) can be accessed simultaneously from the fast and slow storage devices; however, access to multiple hot or cold objects in the fast or slow storage device may still be limited to a single storage device, thereby preventing use of the cumulative bandwidth.
Another disadvantage of prior art systems is high I/O amplification, which may result in writing or reading the same data from multiple storage devices. Tiering mechanisms may also suffer from rigid data movement policies that may require moving an entire file or object across the various storage devices.
In general, embodiments of the present disclosure relate to systems and methods for managing I/O requests (e.g., data access/reads and placement/writes) in heterogeneous storage systems. In one embodiment, the heterogeneous storage manager provides a unified namespace for the different storage devices that abstracts the different file system namespaces and/or storage devices from the applications accessing the storage devices.
I/O requests from applications may be directed to logical files in a unified namespace. However, portions of the logical files may be stored as separate files in different file systems of different storage devices. Thus, having a unified namespace may allow the storage manager to transparently utilize both storage devices without changing their file systems. Individual files may be accessed via a library-level data structure, such as an indirection table, which may provide the current storage location(s) of the logical file.
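To make the indirection concrete, the following is a minimal C++ sketch of a logical file backed by one physical file per storage device; the mount points, naming scheme, and the LogicalFile class itself are illustrative assumptions rather than the storage manager's actual interface.

```cpp
// Sketch only: a logical file that opens one backing physical file per
// device-specific file system. Paths and the API are assumptions.
#include <fcntl.h>
#include <unistd.h>
#include <string>
#include <vector>

struct PhysicalBacking {
    std::string mount;  // e.g. a PM or SSD file system mount point
    int fd = -1;        // descriptor of the physical file on that device
};

class LogicalFile {
public:
    // Create one physical file per backing device; the application only
    // ever sees the logical name.
    LogicalFile(const std::string& logical_name,
                const std::vector<std::string>& mounts) {
        for (const auto& m : mounts) {
            PhysicalBacking b;
            b.mount = m;
            b.fd = ::open((m + "/" + logical_name).c_str(),
                          O_CREAT | O_RDWR, 0644);
            backings_.push_back(b);
        }
    }
    ~LogicalFile() {
        for (auto& b : backings_) if (b.fd >= 0) ::close(b.fd);
    }
    // The storage manager (not the application) later decides which
    // backing serves a given block range.
    int fd_for_backing(size_t i) const { return backings_[i].fd; }
    size_t backing_count() const { return backings_.size(); }

private:
    std::vector<PhysicalBacking> backings_;
};

int main() {
    // Hypothetical mount points for a PM file system and an NVMe SSD.
    LogicalFile f("data.log", {"/mnt/pm", "/mnt/ssd"});
    return f.backing_count() == 2 ? 0 : 1;
}
```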
In one embodiment, the storage manager allows fine-grained placement or movement of data blocks or block ranges (e.g., file extents) across different storage media in a concurrent manner. This helps to achieve cumulative bandwidth across the different storage media. In one embodiment, the storage manager maintains, in a tree data structure, information about the file blocks to be updated or accessed by one or more threads. The tree data structure may be, for example, an interval tree indexed by interval ranges.
In one embodiment, the interval tree is used for conflict resolution. For example, the interval tree may be used to identify I/O requests from one application thread that may conflict with requests from other application threads. The storage manager may create the interval tree during creation of the file. In one embodiment, processes or threads sharing a file may also share the interval tree of the file.
In one embodiment, an application thread that sends an I/O request to update a block/range of a file may acquire a global read-write lock for the interval tree. The application thread may locate the node of the tree containing the block/range to facilitate updating of the node, or generate a new tree node that may be indexed by the requested block/range.
Once the interval tree nodes are updated, I/O requests for the blocks/extents are dispatched to the storage devices based on the devices' physical file descriptors. The physical file descriptors may be stored in the interval tree. In one embodiment, during dispatch of an I/O request, a per-node lock is acquired and the global read-write lock is released. This allows multiple threads to perform I/O on disk for disjoint block ranges simultaneously.
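A minimal sketch of the lock handoff just described: the global read-write lock is held only while the tree is updated, and the I/O is dispatched under a per-node lock so that threads working on disjoint block ranges can proceed concurrently. The IntervalIndex class, the use of a std::map in place of a true interval tree, and the dispatch placeholder are assumptions for illustration.

```cpp
// Sketch of the global-lock-then-per-node-lock protocol described above.
#include <cstdint>
#include <cstdio>
#include <map>
#include <mutex>
#include <shared_mutex>

struct RangeNode {
    uint64_t lo = 0, hi = 0;  // block range covered by this node
    int physical_fd = -1;     // descriptor of the backing physical file
    std::mutex node_lock;     // per-node lock held while dispatching
};

class IntervalIndex {
public:
    void write_range(uint64_t lo, uint64_t hi, int fd) {
        RangeNode* node = nullptr;
        {
            // Hold the global lock only while the tree is updated.
            std::unique_lock<std::shared_mutex> g(global_lock_);
            node = &nodes_[lo];  // find-or-create a node keyed by the low block
            node->lo = lo;
            node->hi = hi;
            node->physical_fd = fd;
        }  // global read-write lock released here
        std::lock_guard<std::mutex> n(node->node_lock);
        dispatch(*node);  // I/O proceeds under the per-node lock only
    }

private:
    static void dispatch(const RangeNode& n) {
        // Placeholder for enqueuing the request to the device's I/O queue.
        std::printf("dispatch blocks [%llu,%llu] to fd %d\n",
                    (unsigned long long)n.lo, (unsigned long long)n.hi,
                    n.physical_fd);
    }
    std::shared_mutex global_lock_;
    std::map<uint64_t, RangeNode> nodes_;  // stand-in for the interval tree
};

int main() {
    IntervalIndex idx;
    idx.write_range(0, 63, /*fd=*/3);    // illustrative descriptor values
    idx.write_range(64, 127, /*fd=*/4);  // disjoint range, different device
    return 0;
}
```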
In one embodiment, the storage manager is configured to dynamically adjust placement of data blocks across one or more storage devices based on I/O traffic. Such dynamic placement policies may allow for efficient utilization of multiple storage devices and efficient utilization of one or more Central Processing Units (CPUs) across multiple storage devices. For example, threads with relatively high CPU utilization may be dynamically allocated more CPU resources than threads with relatively low CPU utilization, and vice versa. Dynamic CPU assignment may help avoid underutilization of one or more CPUs when compared to static data placement policies in which a static number of CPUs are assigned to each storage device regardless of current I/O traffic.
In one embodiment, the storage manager also provides fairly lightweight crash consistency across multiple storage devices. In this regard, the interval tree may be a persistent interval tree in which an application thread may directly add or update an interval tree range to obtain durability and recovery. In one embodiment, nodes of the tree are allocated from a contiguous memory-mapped region in a non-volatile memory (NVM). Each interval tree node may point to the next node using a physical offset from the root node's starting address rather than using a virtual address. Iterating over the persistent interval tree nodes may require only the offset of the root node from the starting address of the mapped region. In the event of a failure, the offset of the root node stored in a log may be used to remap the persistent interval tree, and the other interval tree nodes may be restored.
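The sketch below illustrates offset-based node linking under stated assumptions: for simplicity the offsets are taken from the base of a single mapped region (the description above uses offsets relative to the root node's starting address, with the root's own offset kept in a log), and an ordinary in-memory buffer stands in for the memory-mapped NVM region. The NodePool layout and field names are illustrative, not the patent's implementation.

```cpp
// Sketch: nodes linked by byte offsets instead of virtual addresses, so the
// structure survives being remapped at a different address after a crash.
#include <cstdint>
#include <new>
#include <vector>

struct PersistentNode {
    uint64_t lo, hi;    // interval covered by the node
    uint64_t next_off;  // offset of the next node from the region base
                        // (0 means "no next node"); never a raw pointer
};

struct NodePool {
    // In a real system this buffer would be an mmap()ed NVM region.
    std::vector<uint8_t> region;
    uint64_t next_free = sizeof(uint64_t);  // slot 0 reserved, e.g. for a logged root offset

    explicit NodePool(size_t bytes) : region(bytes, 0) {}

    uint64_t alloc(uint64_t lo, uint64_t hi) {
        uint64_t off = next_free;               // no bounds check in this sketch
        next_free += sizeof(PersistentNode);
        new (region.data() + off) PersistentNode{lo, hi, 0};
        return off;                             // offsets stay valid across remaps
    }
    PersistentNode* node_at(uint64_t off) {
        return reinterpret_cast<PersistentNode*>(region.data() + off);
    }
    // Recovery-style walk: given the logged root offset, follow the chain in
    // the (re)mapped region without needing any old virtual addresses.
    size_t count_from(uint64_t root_off) {
        size_t n = 0;
        for (uint64_t off = root_off; off != 0; off = node_at(off)->next_off) ++n;
        return n;
    }
};

int main() {
    NodePool pool(1 << 16);
    uint64_t root = pool.alloc(0, 63);
    uint64_t second = pool.alloc(64, 127);
    pool.node_at(root)->next_off = second;  // link by offset, not pointer
    return pool.count_from(root) == 2 ? 0 : 1;
}
```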
FIG. 1 is a block diagram of a computing system 100 including a host computing device 102 coupled to a heterogeneous storage system 104, according to one embodiment. Computing system 100 may be a desktop computer, a laptop computer, a web server, a mobile device, an embedded computer, etc.
The host computing device 102 may include a processor 106 and a memory 108. The processor 106, also referred to as a Central Processing Unit (CPU), may include one or more processing cores configured to execute program instructions stored in the memory 108. In one embodiment, the processor 106 is configured to execute one or more software applications and/or processing threads 110a-110c (collectively 110). The application/thread 110 may provide I/O requests (e.g., read requests, write requests, etc.) to the storage system 104.
The memory 108 of the host computing device 102 may include one or more volatile and/or non-volatile memories including, but not limited to, Random Access Memory (RAM) (e.g., Dynamic Random Access Memory (DRAM)), Read Only Memory (ROM), and the like. The memory may store instructions for performing various operations, including management operations of the storage system 104. The management operations may include reading data, writing data, or erasing data of the storage system 104, and/or other similar operations.
In one embodiment, heterogeneous storage system 104 includes one or more storage devices 112a-112c (collectively 112). In one embodiment, at least one of the storage devices 112 is heterogeneous with another of the storage devices. In this regard, at least one of the storage devices 112 has different storage characteristics (e.g., bandwidth, latency, capacity, etc.) than another of the storage devices. The storage devices 112 included in heterogeneous storage system 104 may include, but are not limited to, non-volatile memory (e.g., PM), SSDs (e.g., NVMe SSDs), Hard Disk Drives (HDDs), and the like. Different storage devices 112 may have corresponding file systems 114a-114c (collectively 114). The file system 114 may store metadata of the files for controlling how the files are stored in and retrieved from the corresponding storage devices 112.
In one embodiment, various management operations of heterogeneous storage system 104 are performed by storage manager 116. For example, the storage manager 116 may determine where in the heterogeneous storage system 104 the data is to be placed, as well as the level of granularity at which the data is placed. The storage manager 116 may be implemented as middleware software stored in a user or kernel space of the memory 108 for managing communications between the applications 110 and the storage system 104. The management of the storage system 104 by the storage manager 116 may be transparent to the application 110.
In one embodiment, the storage manager 116 provides one or more unified namespaces that decouple the applications 110 from the file systems 114 and/or the storage devices 112. In this regard, the storage manager 116 provides a single directory structure that may span one or more storage devices 112 while allowing the storage devices to maintain their own file systems 114. For example, the storage manager 116 may store different portions of an entire file in different ones of the heterogeneous storage devices 112, but present the portions as a single logical file in a unified namespace (e.g., in a single directory structure). The application 110 may then direct read and write commands to the logical file based on the file descriptors in the unified namespace.
In one embodiment, the storage manager 116 is configured to employ a fine-grained data placement policy that may facilitate utilizing the cumulative storage bandwidth of the heterogeneous storage devices 112. The fine-grained data placement policy may be at a byte level, a block level, etc. In one embodiment, the storage manager maintains a data structure for the logical file. The data structure may store mapping information between a portion of the logical file (e.g., a byte, a block, or a range of bytes or blocks) and a corresponding storage device. The data structure may allow multiple applications/threads 110 to access different portions of the logical file simultaneously across the heterogeneous storage devices 112, thereby allowing the cumulative storage bandwidth of the storage system 104 to be utilized.
In one embodiment, the data structure is a persistent data structure that supports crash consistency and durability. In this regard, nodes of the data structure may be allocated from a contiguous memory-mapped region in the NVM. An interval tree node may point to the next node using a physical offset from the root node's starting address rather than using a virtual address. In the event of a failure, the persistent data structure may be remapped using the offsets stored in a log.
In one embodiment, the storage manager 116 is configured with a dynamic data placement policy for load balancing and for maximizing the storage write bandwidth of the heterogeneous storage devices 112. With respect to load balancing, dynamic data placement may collect I/O throughput information to reassign I/Os to different storage devices based on load considerations. The dynamic data placement policy may also allow for reallocation of the CPU resources that handle I/O operations, to maximize CPU resource usage.
For example, when employing a static data placement policy instead of a dynamic data placement policy, one or more CPU cores of the processor 106 may be statically assigned to one of the storage devices 112 for processing write operations to the storage device. Due to the static assignment, one or more CPU cores may not be used by other storage devices even when the one or more CPU cores are inactive, which may result in underutilization of the one or more CPU cores.
In one embodiment, dynamic scheduling of the CPU cores of the processor 106 allows a storage device 112 with a relatively high CPU utilization to have more CPU resources allocated to it than a storage device with a low CPU utilization. In this regard, the storage manager 116 is configured to collect the aggregate I/O throughput of the various CPU resources and dynamically reassign substantially inactive CPU resources assigned to one of the storage devices 112 (e.g., NVM storage) to another of the storage devices (e.g., an SSD). Dynamic CPU allocation may occur when the in-memory cache is full and data needs to be placed into one of the storage devices 112.
FIG. 2 is a block diagram of heterogeneous storage manager 116, according to one embodiment. Storage manager 116 includes, but is not limited to, a namespace manager 200, a data structure manager 202, and an I/O placement engine 204. Although the various components of fig. 2 are assumed to be separate functional units, those skilled in the art will recognize that the functionality of the components may be combined or integrated into a single component or further sub-divided into additional sub-components without departing from the spirit and scope of the inventive concept.
The namespace manager 200 may be configured to provide one or more unified namespaces to the applications 110 that abstract the physical location(s) of a file. In some embodiments, the namespace manager 200 provides a first unified namespace for a first logical file system and a second unified namespace for a second logical file system. The applications 110 may direct I/O requests to the first unified namespace and/or the second unified namespace.
In one embodiment, when the application 110 creates a new file, the namespace manager 200 generates a logical file with a logical file name in the logical file system. However, different portions of the file may be stored in different physical locations (e.g., in different heterogeneous storage devices 112) as separate physical files. In one embodiment, the name/descriptor of the logical file is returned to the application 110, abstracting the physical files in the different storage devices 112 that may store the different portions of the file. The application 110 may direct I/O requests/commands (e.g., read/write commands) to the logical file.
In one embodiment, when the application 110 generates a file, the data structure manager 202 generates a data structure 208 for the logical file. The data structure 208 may be associated with logical file names in a unified namespace. The application 110 sharing logical files may also share the data structures 208 of the files. The data structure 208 may be, for example, a tree structure.
In one embodiment, the data structure identifies where different portions of the logical file are stored across heterogeneous storage device 112. One or more of the heterogeneous storage devices 112 may generate file metadata regarding the physical files that store portions of the logical files. The file metadata may be stored in a file system 114 corresponding to the storage device 112. Metadata may be stored in an index node (inode), but embodiments are not limited thereto. The metadata may identify the physical location of one or more blocks of the physical file in the storage device 112.
In one embodiment, the data structure 208 is used for conflict resolution. For example, the data structure 208 may be used to identify updates to blocks/extents of a file that conflict with requests from other threads. A global read-write lock may be used to resolve conflicts. In this regard, the thread that obtains the global read-write lock may update the data structure 208 for the block/extent of the file to be updated. During dispatch of the I/O request, the thread may acquire a per-node lock on the node containing the block/range of the file to be updated, and release the global read-write lock.
In one embodiment, the data structure manager 202 provides crash consistency and durability. In this regard, the data structure manager 202 allows for multiple levels of durability across the different storage devices 112. For example, when data is spread across the different storage devices 112, a first type of data storage device may provide weaker data durability than a second type of data storage device, but stronger data durability than a third type of data storage device. In this case, a minimum granularity model may be employed to provide minimum metadata durability for files stored across the first, second, and third types of data storage devices. Alternatively, data (e.g., database data) for which maximum data durability is expected may be logged to the type of data storage device supporting the maximum data durability.
To provide maximum durability for cache pages destined for the SSD, an append-only journaling protocol may be employed for crash consistency, with cache line write back (CLWB) and memory fence instructions used to issue persistent writes and prevent data loss from the volatile processor caches. A commit flag may be set after a request is added to the I/O queue. After a failure, requests whose commit flag is not set may be discarded during recovery.
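A sketch of the commit-flag protocol just described, assuming an x86 target: the journal entry payload is persisted first, then the commit flag is set and persisted, with the flush helper using CLWB only when the compiler reports support for it (elsewhere the flush is a no-op in this sketch). The JournalEntry layout and the persist() helper are illustrative assumptions.

```cpp
// Sketch: append-only journal entry whose commit flag is persisted last, so
// uncommitted entries can be discarded during recovery.
#include <atomic>
#include <cstdint>
#include <cstring>
#if defined(__CLWB__)
#include <immintrin.h>
#endif

static void persist(const void* addr, size_t len) {
#if defined(__CLWB__)
    // Write back the cache lines covering [addr, addr+len), then fence.
    const char* p = static_cast<const char*>(addr);
    for (size_t i = 0; i < len; i += 64) _mm_clwb(const_cast<char*>(p + i));
    _mm_sfence();
#else
    (void)addr; (void)len;  // on other targets this sketch does nothing
#endif
}

struct JournalEntry {
    uint64_t file_offset = 0;
    uint32_t length = 0;
    std::atomic<uint32_t> committed{0};  // unset entries are discarded on recovery
    uint8_t payload[48] = {};
};

static void append(JournalEntry& e, uint64_t off, const void* data, uint32_t len) {
    e.file_offset = off;
    e.length = len;
    std::memcpy(e.payload, data, len);
    persist(&e, sizeof(e));                      // persist the payload first...
    e.committed.store(1, std::memory_order_release);
    persist(&e.committed, sizeof(e.committed));  // ...then the commit flag
}

int main() {
    JournalEntry e;
    const char msg[] = "hello";
    append(e, /*off=*/0, msg, sizeof(msg));
    return e.committed.load() == 1 ? 0 : 1;
}
```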
In some embodiments, the data structure manager 202 allows for crash consistency and durability of the data structures 208 generated for the files and of the data stored in the cache 210. In one embodiment, the data structure 208 is a persistent data structure that an application thread may directly add to or update, with durability and recovery. In this regard, nodes of the data structure may be allocated from a contiguous memory-mapped region in the NVM. A node of the data structure 208 may point to the next node using a physical offset from the root node's starting address rather than using a virtual address. In the event of a failure, the persistent data structure may be remapped using the offsets stored in a log.
In one embodiment, the I/O placement engine 204 is configured with a data placement algorithm for placing/writing data across one or more of the heterogeneous storage devices 112. When the cache 210 storing data generated by the various applications 110 is full, a data placement mechanism may be invoked. In this case, the I/O placement engine 204 may invoke the data placement algorithm to move at least some of the data in the cache 210 to one or more of the heterogeneous storage devices 112. In this regard, the OS-level virtual file system and the cache layer of a block file system may use coarse-grained node-level locks that impose scalability bottlenecks, which may prevent multiple threads from operating on a file at the same time. The unified namespace and metadata management layer provided by the storage manager 116, which manages both fine-grained data placement and cached data in DRAM, can help address these bottlenecks. In this regard, to reduce bottlenecks in the OS-level cache, embodiments of the present disclosure provide a scalable application-level cache that can avoid the concurrency bottleneck of the OS cache and the overhead of user-level-to-OS context switching.
In one embodiment, the I/O placement engine 204 determines the granularity of data to be stored in the different storage devices 112, and/or the type of storage device to store the data. In one embodiment, the data placement algorithm may allow data to be placed at a substantially fine-grained level (byte level, block level, etc.) to allow different bytes, blocks, or block ranges of the logical file to be accessed simultaneously by multiple applications/threads 110.
In one embodiment, the data placement algorithm considers factors such as the load and the performance characteristics (e.g., bandwidth, latency, etc.) of the storage devices 112 in determining how and where to store the different portions of the file. For example, hot portions of a frequently accessed file may be stored in the NVM, while cold portions of the file may be stored in the SSD. In another example, if more threads are reading from and writing to the first storage device than to the second storage device, one or more blocks of the file may be stored in the second storage device rather than the first storage device. Based on these factors, the data placement algorithm may dynamically place a first I/O request directed to the namespace to a first storage device and place a second I/O request directed to the namespace to a second storage device.
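As a rough illustration of this kind of policy, the sketch below chooses a backing device for a block range from its recent access count and the current per-device load; the hotness threshold, the load comparison, and the two-device model are assumptions for illustration rather than the placement algorithm of the disclosure.

```cpp
// Sketch: pick a device for a block range from hotness and device load.
#include <cstdint>
#include <cstdio>

enum class Device { PM, SSD };

struct DeviceState {
    double bandwidth_gbps;  // rough performance characteristic of the device
    int queued_requests;    // current load (outstanding requests) on the device
};

static Device choose_device(uint32_t recent_accesses,
                            const DeviceState& pm, const DeviceState& ssd) {
    const bool hot = recent_accesses > 8;  // assumed hotness threshold
    const bool pm_is_faster = pm.bandwidth_gbps > ssd.bandwidth_gbps;
    // Prefer the faster device for hot data unless it is far more loaded.
    const bool pm_overloaded = pm.queued_requests > ssd.queued_requests + 16;
    if (hot && pm_is_faster && !pm_overloaded) return Device::PM;
    return Device::SSD;
}

int main() {
    DeviceState pm{/*bandwidth_gbps=*/8.0, /*queued_requests=*/2};
    DeviceState ssd{/*bandwidth_gbps=*/3.0, /*queued_requests=*/10};
    Device d = choose_device(/*recent_accesses=*/12, pm, ssd);
    std::printf("placed on %s\n", d == Device::PM ? "PM" : "SSD");
    return 0;
}
```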
In one embodiment, the I/O placement engine 204 is configured to manage CPU utilization across the heterogeneous storage devices 112. In one embodiment, the data placement algorithm dynamically assigns CPU resources to one or more processing threads 212a-212c (collectively 212) that issue I/O requests to the storage devices 112. For example, an I/O request may be to place data from the cache 210 into one of the storage devices 112.
In one embodiment, a data placement algorithm is used to maximize the write bandwidth of the heterogeneous storage system 104. In this regard, the processing threads 212 may be assigned to one or more of the storage devices 112 to write data to the assigned storage device (e.g., during an append operation). If one of the processing threads 212 assigned to a first one of the storage devices 112 (e.g., a relatively fast storage device) is active below a threshold when an I/O request is issued, the thread may be assigned to a second one of the storage devices (e.g., a relatively slow storage device). After a thread is assigned to a particular type of storage device, CPU resources may be allocated to the thread. In this regard, the CPU resources are divided into two CPU groups: a fast CPU group and a slow CPU group. The group from which CPU resources are allocated to a thread may depend on whether the thread is assigned to the fast or the slow storage. If more threads are assigned to the fast storage than the number of CPUs in the fast group, the I/O placement engine may borrow CPUs from the slow group, as long as a minimum number of CPUs remains in the slow group.
FIG. 3 is a conceptual diagram of a data structure 300 generated by the data structure manager 202 of a logical file according to one embodiment. The data structure 300 may be similar to the data structure 208 of fig. 2.
In one embodiment, the data structure 300 is a tree data structure having one or more nodes 302a, 302b (collectively 302). More specifically, the data structure may be an interval tree, where nodes 302 of the tree may represent physical files in the storage device 112. The physical file may store a portion of the entire logical file. For example, a first node 302a may represent a first physical file in a first storage device 112 (e.g., NVM) and a second node 302b may represent a second physical file in a second storage device (e.g., SSD). The first physical file and the second physical file may be represented as a single logical file in a virtual file system. In one embodiment, node 302 stores links to metadata stored in file system 114 that corresponds to a storage device storing a corresponding physical file.
In one embodiment, the nodes 302 of the tree include a span range having a low value and a high value. The span range may represent a range of data blocks 304a-304d (collectively 304) included in the node 302. One or more of the data blocks 304 may store an offset and size range that may be mapped to one or more corresponding data blocks of a physical file in the corresponding storage device 112. For example, data blocks 304a-304c may store offset and size ranges that may be linked to one or more data blocks in a first storage device (e.g., NVM), and data block 304d may store an offset and size range that may be linked to a range of data blocks (e.g., a file extent) in a second storage device (e.g., SSD).
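A compact sketch of the node layout described for FIG. 3: one node spans four data blocks, three backed by NVM and one by an SSD. The field names and the use of a vector for the block list are illustrative assumptions.

```cpp
// Sketch: an interval-tree node covering a span of the logical file, whose
// block extents map to physical files on different devices.
#include <cstdint>
#include <vector>

enum class Backing { NVM, SSD };

struct BlockExtent {
    uint64_t offset;   // offset within the logical file
    uint64_t size;     // length of the extent
    Backing device;    // which physical file backs this extent
    int physical_fd;   // descriptor of that physical file
};

struct TreeNode {
    uint64_t span_low;                // low end of the node's span range
    uint64_t span_high;               // high end of the node's span range
    std::vector<BlockExtent> blocks;  // e.g. three NVM extents and one SSD extent
};

int main() {
    TreeNode n{0, 4096,
               {{0, 1024, Backing::NVM, 3},
                {1024, 1024, Backing::NVM, 3},
                {2048, 1024, Backing::NVM, 3},
                {3072, 1024, Backing::SSD, 4}}};
    return n.blocks.size() == 4 ? 0 : 1;
}
```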
In one embodiment, one or more application/processing threads 306a, 306b (collectively 306) may issue one or more I/O requests directed to a logical file. In one embodiment, the I/O requests are processed by the storage manager 116. One or more applications 306 may be similar to application 110 of fig. 1 and/or processing thread 212 of fig. 2.
In one embodiment, the storage manager 116 processes an I/O request by identifying the data structure 300 corresponding to the logical file of the I/O request. The data structure may be used to determine whether the I/O request conflicts with an I/O request from another one of the processing threads 306. In one embodiment, the processing thread 306 that acquired the global read-write lock may update the data structure 300. The processing thread 306 may update the data structure by, for example, adding a data block for the I/O request if the data block does not already exist in the data structure 300. The data block may be added to one of the nodes 302 based on the placement algorithm of the I/O placement engine 204.
In one embodiment, when the I/O request is dispatched to the storage device 112 that is mapped to the data block corresponding to the I/O request, the global read-write lock is released and the per-node lock is acquired. This may allow multiple applications to perform I/O operations on the storage devices 112 for disjoint block ranges in a concurrent manner, thereby overcoming the concurrency bottleneck of the underlying file systems 114. For example, in the example of FIG. 3, processing thread 306a may perform I/O operations for data block 304a while processing thread 306b performs I/O operations for data block 304d.
FIG. 4 is a flow diagram for processing a store operation, according to one embodiment. The storage operation may be, for example, a read or write operation by one of applications 306 to a portion of a logical file in a virtual file system. The process begins and in act 400, the storage manager 116 receives a storage operation for the portion of the logical file.
In act 402, the storage manager 116 identifies a data structure 300 (e.g., a tree structure) associated with the logical file. Application 306 may search data structure 300 for nodes (similar to node 302) that contain portions of the logical file for the storage operation (e.g., similar to data block 304) and perform the update of the data structure. The update may be, for example, generating a new node and/or data block corresponding to a portion of the logical file associated with the storage operation, or updating an existing node and/or data block. For example, a file containing a plurality of user information may be updated to change the information of one of the users. This may result in an update of an existing data block. In another example, an existing node may be updated if a data block associated with the node has moved to a different location. In one embodiment, the application 306 obtains a global read-write lock for the data structure when updating the interval tree node.
In act 404, the storage manager 116 identifies the storage device 112 and/or the block/block range in the storage device 112 mapped to the portion of the logical file to which the storage operation was directed based on the data structure.
In act 406, the storage operation is routed to the storage device for execution. For example, the storage manager 116 may add storage operations to the I/O queue. The controller of the storage device 112 may retrieve the storage operation from the I/O queue and output a result (e.g., a read or write result) in response. In one embodiment, when a storage operation is routed to a storage device, the global read-write lock is released and a per-node lock is acquired for the node containing the affected data block. In this way, another application may perform another storage operation on a different range of data blocks simultaneously.
FIG. 5 is a flowchart of a process for updating the data structure 300 based on an I/O request associated with one or more data blocks 304, according to one embodiment. In act 500, the accessing application 306 obtains a global read-write lock for the data structure 300. If the read-write lock is not available, the application 306 may wait to perform the update until the lock becomes available.
In act 502, the application 110 traverses the data structure 300 to search for the node 302 that contains the data block 304 corresponding to the I/O request.
In act 504, application 110 determines whether such a data block exists. If the answer is yes, the application updates the corresponding node in act 506. The update may include, for example, a placement location, a timestamp, etc.
If the data block does not exist, application 110 adds a new data block having a range interval corresponding to the I/O request in act 508. Placement of new data blocks in one of the storage devices 112 may depend on the placement algorithm performed by the I/O placement engine 204.
In act 510, the update of data structure 300 is completed and application 110 releases the global read-write lock.
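The update path of FIG. 5 can be sketched as a find-or-insert performed under the global read-write lock. The extent map standing in for the interval tree, the metadata fields, and the upsert() name are assumptions for illustration.

```cpp
// Sketch of the FIG. 5 flow: lock, search, update-or-insert, unlock.
#include <cstdint>
#include <ctime>
#include <map>
#include <mutex>
#include <shared_mutex>

struct Extent {
    uint64_t size = 0;
    int device_id = -1;       // chosen by the placement algorithm
    std::time_t updated = 0;  // e.g. a timestamp refreshed on each update
};

class ExtentMap {
public:
    void upsert(uint64_t offset, uint64_t size, int device_id) {
        std::unique_lock<std::shared_mutex> g(global_lock_);   // act 500
        auto it = extents_.find(offset);                       // act 502
        if (it != extents_.end()) {                            // acts 504/506
            it->second.device_id = device_id;
            it->second.updated = std::time(nullptr);
        } else {                                               // act 508
            extents_[offset] = Extent{size, device_id, std::time(nullptr)};
        }
    }                                                          // act 510: lock released
private:
    std::shared_mutex global_lock_;
    std::map<uint64_t, Extent> extents_;  // stand-in for the interval tree
};

int main() {
    ExtentMap m;
    m.upsert(0, 4096, /*device_id=*/0);  // inserts a new extent
    m.upsert(0, 4096, /*device_id=*/1);  // updates the existing extent
    return 0;
}
```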
FIG. 6 is a flow diagram of a process for dynamic I/O placement, according to one embodiment. This process may be performed, for example, by the I/O placement engine 204 on a periodic basis, where each period is referred to as an epoch. Each epoch may be, for example, a one-second interval.
In act 600, the I/O placement engine 204 collects aggregate I/O throughput for one or more I/O threads (e.g., the processing thread 306). For example, the I/O placement engine 204 may count the number of I/O requests for one or more I/O threads during the current epoch. One or more I/O threads may be assigned to issue I/O requests to one or more of the different storage devices 112. For example, a first one of application threads 212a (fig. 3) may be assigned to issue I/O requests to relatively fast storage such as, for example, an NVM device, while a second one of application threads 212b may be assigned to issue I/O requests to relatively slow storage such as, for example, an NVMe SSD device.
In act 602, the I/O placement engine 204 compares the current throughput of one or more I/O threads to the previous throughput. For example, the I/O placement engine 204 may compare the throughput of one of the I/O threads in the current epoch to the throughput of I/O threads in one or more previous epochs.
In act 604, the I/O placement engine 204 determines whether the current throughput of one of the I/O threads is lower than the previous throughput. If the answer is yes, the I/O placement engine 204 reassigns the thread to another storage device. For example, if an I/O thread assigned to a relatively faster storage device is substantially inactive when an I/O request is issued, the I/O thread may be reassigned to a relatively slower storage device. In another example, an I/O thread assigned to a relatively slower storage device may be reassigned to a relatively faster storage device if the I/O thread is substantially active when the I/O request is issued. This may help maximize, for example, the memory write bandwidth of the different memory devices 112.
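A minimal sketch of the per-epoch check in FIG. 6, assuming a simple two-device model and per-thread request counters; the reassignment rule shown (move a thread to the other device when its throughput drops relative to the previous epoch) is a simplification of the behavior described above.

```cpp
// Sketch: roll the epoch window and reassign threads whose throughput fell.
#include <cstdint>
#include <vector>

struct IoThread {
    int assigned_device;      // 0 = fast (e.g. NVM), 1 = slow (e.g. NVMe SSD)
    uint64_t prev_epoch_ios;  // requests counted in the previous epoch
    uint64_t curr_epoch_ios;  // requests counted in the current epoch
};

static void end_of_epoch(std::vector<IoThread>& threads) {
    for (auto& t : threads) {
        if (t.curr_epoch_ios < t.prev_epoch_ios) {      // throughput dropped
            t.assigned_device = 1 - t.assigned_device;  // reassign to the other device
        }
        t.prev_epoch_ios = t.curr_epoch_ios;            // roll the window forward
        t.curr_epoch_ios = 0;
    }
}

int main() {
    std::vector<IoThread> threads{{0, 1000, 200}, {1, 50, 400}};
    end_of_epoch(threads);
    // The first thread's throughput fell, so it moves off the fast device.
    return threads[0].assigned_device == 1 ? 0 : 1;
}
```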
FIG. 7 is a flowchart of a process for dynamically allocating CPU resources to I/O threads, according to one embodiment. In one embodiment, the CPU resources are divided into two CPU groups, but the embodiments are not limited thereto. For example, the CPU resources may be divided into a first (e.g., fast) CPU group and a second (e.g., slow) CPU group.
In act 700, the I/O placement engine 204 calculates CPU utilization for one or more I/O threads.
In act 702, the I/O placement engine 204 assigns one or more I/O threads to one of the CPU groups. For example, the I/O placement engine 204 may assign I/O threads assigned to a first type of storage (e.g., fast storage) to a first (e.g., fast) CPU group and I/O threads assigned to a second type of storage (e.g., slow storage) to a second (e.g., slow) CPU group.
In act 704, it is determined whether there are sufficient CPU resources in the CPU group to be allocated to the I/O threads. For example, the I/O placement engine 204 may determine whether more threads are assigned to the first storage type than there are available CPU resources in the first CPU group. If the answer is yes, then in act 706 the I/O placement engine 204 borrows one or more CPU resources from the second CPU group to be allocated to the threads requiring CPU resources. In one embodiment, borrowing is possible as long as a threshold minimum amount of CPU resources remains in the second CPU group.
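A sketch of the group bookkeeping in FIG. 7 under the stated assumptions: CPU availability is tracked as simple counters, and a CPU is borrowed from the slow group only while more than the required minimum remains there. The CpuGroups structure and the grant_fast_cpu() helper are illustrative.

```cpp
// Sketch: grant CPUs to threads bound to fast storage, borrowing from the
// slow group down to a fixed minimum.
#include <cstdio>

struct CpuGroups {
    int fast_free;     // unallocated CPUs in the fast group
    int slow_free;     // unallocated CPUs in the slow group
    int slow_minimum;  // CPUs that must always remain in the slow group

    // Returns true if a CPU could be granted to a thread bound to fast storage.
    bool grant_fast_cpu() {
        if (fast_free > 0) { --fast_free; return true; }            // act 704: enough in group
        if (slow_free > slow_minimum) { --slow_free; return true; } // act 706: borrow
        return false;
    }
};

int main() {
    CpuGroups g{/*fast_free=*/1, /*slow_free=*/3, /*slow_minimum=*/2};
    bool a = g.grant_fast_cpu();  // served from the fast group
    bool b = g.grant_fast_cpu();  // borrowed from the slow group
    bool c = g.grant_fast_cpu();  // denied: slow group is at its minimum
    std::printf("%d %d %d\n", (int)a, (int)b, (int)c);
    return 0;
}
```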
In some embodiments, the systems and methods for managing I/O requests in a heterogeneous storage system discussed above are implemented in one or more processors. The term processor may refer to one or more processors and/or one or more processing cores. The one or more processors may be hosted in a single device or distributed across multiple devices (e.g., on a cloud system). A processor may include, for example, Application Specific Integrated Circuits (ASICs), general purpose or special purpose Central Processing Units (CPUs), Digital Signal Processors (DSPs), Graphics Processing Units (GPUs), and programmable logic devices such as Field Programmable Gate Arrays (FPGAs). In a processor, as used herein, each function is performed either by hardware configured, i.e., hardwired, to perform that function, or by more general purpose hardware, such as a CPU, configured to execute instructions stored in a non-transitory storage medium (e.g., memory). A processor may be fabricated on a single Printed Circuit Board (PCB) or distributed over several interconnected PCBs. A processor may contain other processing circuitry; for example, a processing circuit may include two processing circuits, an FPGA and a CPU, interconnected on a PCB.
It will be understood that, although the terms "first," "second," "third," etc. may be used herein to describe various elements, components, regions, layers and/or sections, these elements, components, regions, layers and/or sections should not be limited by these terms. These terms are only used to distinguish one element, component, region, layer or section from another element, component, region, layer or section. Thus, a first element, component, region, layer or section discussed herein could be termed a second element, component, region, layer or section without departing from the spirit and scope of the present inventive concept.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the inventive concepts. Moreover, unless explicitly stated, the embodiments described herein are not mutually exclusive. Aspects of the embodiments described herein may be combined in some implementations.
With respect to the flowcharts of fig. 4-7, it should be appreciated that the order of the steps of the processes of these flowcharts is not fixed, but rather may be modified, changed in order, performed differently, performed sequentially, concurrently or simultaneously, or changed to any desired order as would be recognized by one skilled in the art.
As used herein, the terms "substantially," "about," and the like are used as approximate terms, rather than degree terms, and are intended to account for inherent deviations in measured or calculated values that would be recognized by one of ordinary skill in the art.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items. An expression such as "at least one of," when it precedes a list of elements, modifies the entire list of elements and does not modify the individual elements of the list. Furthermore, the use of "may" in describing embodiments of the inventive concept refers to "one or more embodiments of the present disclosure." Furthermore, the term "exemplary" is intended to refer to an example or illustration. As used herein, the terms "use," "using," and "used" may be considered synonymous with the terms "utilize," "utilizing," and "utilized," respectively.
It will be understood that when an element or layer is referred to as being "on," "connected to," "coupled to" or "adjacent to" another element or layer, it can be directly on, connected to, coupled to or adjacent to the other element or layer, or one or more intervening elements or layers may be present. In contrast, when an element or layer is referred to as being "directly on," "directly connected to," "directly coupled to," or "directly adjacent to" another element or layer, there are no intervening elements or layers present.
Any numerical range recited herein is intended to include all sub-ranges of the same numerical precision subsumed within the recited range. For example, a range of "1.0 to 10.0" is intended to include all subranges between (and including) the recited minimum value of 1.0 and the recited maximum value of 10.0, that is, having a minimum value equal to or greater than 1.0 and a maximum value equal to or less than 10.0, such as, for example, 2.4 to 7.6. Any maximum numerical limitation recited herein is intended to include all lower numerical limitations subsumed therein, and any minimum numerical limitation recited herein is intended to include all higher numerical limitations subsumed therein.
While exemplary embodiments of systems and methods for managing I/O requests in a heterogeneous storage system have been specifically described and illustrated herein, many modifications and variations will be apparent to those skilled in the art. Thus, it should be appreciated that systems and methods for managing I/O requests in a heterogeneous storage system constructed in accordance with the principles of the present disclosure may be embodied in a manner different from that specifically described herein. The disclosure is also defined in the appended claims and equivalents thereof.
Systems and methods for managing I/O requests in a heterogeneous storage system may include one or more combinations of features set forth in the following claims.
Statement 1: a method for managing a storage system comprising a first storage device and a second storage device different from the first storage device, the method comprising: receiving a first storage operation for a first portion of a file; identifying a data structure associated with the file; identifying a first storage device for a first portion of a file based on a data structure; and transmitting the first storage operation to the first storage device, wherein the first storage device updates or accesses the first portion of the file in response to the first storage operation.
Statement 2: in the method of claim 1, the data structure comprises a tree data structure having a first node and a second node, wherein the first node comprises first information about a first portion of the file and the second node comprises second information about a second portion of the file, wherein the second portion of the file is stored in the second storage device.
Statement 3: in the method of one of claims 1 or 2, the first information identifies first metadata in a first file system and the second information identifies second metadata in a second file system different from the first file system.
Statement 4: In the method of one of claims 1-3, the file is a logical file in a virtual namespace, wherein the logical file abstracts the first file system and the second file system from the application.
Statement 5: in the method of one of claims 1-4, a first storage operation is directed to a virtual namespace, and the method comprises: receiving a second storage operation directed to the virtual namespace; identifying a second storage device; and transmitting the second storage operation to the second storage device.
Statement 6: in the method of one of claims 1-5, the method further comprising: receiving a second storage operation for a second portion of the file; identifying a data structure associated with the file; identifying a second storage device for a second portion of the file based on the data structure; and transmitting the second storage operation to the second storage device, wherein the second storage device updates the second portion of the file concurrently with the first storage device updating the first portion of the file in response to the second storage operation.
Statement 7: in the method of one of claims 1-6, the method further comprising: identifying a first processing thread assigned to a first storage device; determining a throughput of a first processing thread; and reassigning the first processing thread to the second storage device based on the throughput.
Statement 8: in the method of one of claims 1-7, the method further comprising: calculating the utilization of processing resources by the first processing thread; determining availability of processing resources in a first processing group; and borrowing processing resources from the second processing group in response to the determination.
Statement 9: in the method of one of claims 1-8, the first portion of the file comprises a first data block of the file and the second portion of the file comprises a second data block of the file.
Statement 10: the method of one of claims 1-9, wherein the first storage device is a non-volatile memory device and the second storage device is a solid state drive.
Statement 11: a system for managing a storage system, the storage system comprising a first storage device and a second storage device different from the first storage device, the system comprising: a processor; and a memory, wherein the memory stores instructions that, when executed by the processor, cause the processor to: receiving a first storage operation for a first portion of a file; identifying a data structure associated with the file; identifying a first storage device for a first portion of a file based on a data structure; and transmitting the first storage operation to the first storage device, wherein the first storage device updates or accesses the first portion of the file in response to the first storage operation.
Statement 12: in the system of claim 11, the data structure comprising a tree data structure having a first node and a second node, wherein the first node comprises first information about a first portion of the file and the second node comprises second information about a second portion of the file, wherein the second portion of the file is stored in the second storage device.
Statement 13: in the system of one of claims 11 or 12, the first information identifies first metadata in a first file system and the second information identifies second metadata in a second file system different from the first file system.
Statement 14: in the system of one of claims 11-13, the file is a logical file in a virtual namespace, wherein the logical file abstracts the first file system and the second file system from the application.
Statement 15: in the system of one of claims 11-14, the first storage operation is directed to a virtual namespace, and the instructions further cause the processor to: receive a second storage operation directed to the virtual namespace; identify the second storage device; and send the second storage operation to the second storage device.
Statement 16: in the system of one of claims 11-15, the instructions further cause the processor to: receive a second storage operation for a second portion of the file; identify the data structure associated with the file; identify the second storage device for the second portion of the file based on the data structure; and send the second storage operation to the second storage device, wherein, in response to the second storage operation, the second storage device updates the second portion of the file concurrently with the first storage device updating the first portion of the file.
Statement 17: in the system of one of claims 11-16, the instructions further cause the processor to: identify a first processing thread assigned to the first storage device; determine a throughput of the first processing thread; and reassign the first processing thread to the second storage device based on the throughput.
Statement 18: in the system of one of claims 11-17, the instructions further cause the processor to: calculate the utilization of processing resources by the first processing thread; determine availability of processing resources in a first processing group; and borrow processing resources from the second processing group in response to the determination (a hypothetical sketch of this thread and resource scheduling follows the claims below).
Statement 19: in the system of one of claims 11-18, a first portion of the file comprises a first data block of the file and a second portion of the file comprises a second data block of the file.
Statement 20: the system of any of claims 11-19, wherein the first storage device is a non-volatile memory device and the second storage device is a solid state drive.

Claims (20)

1. A method for managing a storage system comprising a first storage device and a second storage device different from the first storage device, the method comprising:
receiving a first storage operation for a first portion of a file;
identifying a data structure associated with the file;
identifying the first storage device for the first portion of the file based on the data structure; and
sending the first storage operation to the first storage device, wherein the first storage device updates or accesses the first portion of the file in response to the first storage operation.
2. The method of claim 1, wherein the data structure comprises a tree data structure having a first node and a second node, wherein the first node comprises first information about a first portion of the file and the second node comprises second information about a second portion of the file, wherein the second portion of the file is stored in the second storage device.
3. The method of claim 2, wherein the first information identifies first metadata in a first file system and the second information identifies second metadata in a second file system different from the first file system.
4. The method of claim 3, wherein the file is a logical file in a virtual namespace, wherein the logical file abstracts the first file system and the second file system from the application.
5. The method of claim 4, wherein the first storage operation is directed to a virtual namespace, the method further comprising:
receiving a second storage operation directed to the virtual namespace;
identifying the second storage device; and
sending the second storage operation to the second storage device.
6. The method of claim 2, further comprising:
receiving a second storage operation for a second portion of the file;
identifying the data structure associated with the file;
identifying the second storage device for the second portion of the file based on the data structure; and
sending the second storage operation to the second storage device, wherein, in response to the second storage operation, the second storage device updates the second portion of the file simultaneously with the first storage device updating the first portion of the file.
7. The method of claim 6, further comprising:
identifying a first processing thread assigned to the first storage device;
determining a throughput of the first processing thread; and
reassigning the first processing thread to the second storage device based on the throughput.
8. The method of claim 7, further comprising:
calculating the utilization of processing resources by the first processing thread;
determining availability of processing resources in a first processing group; and
borrowing processing resources from the second processing group in response to the determination.
9. The method of claim 2, wherein the first portion of the file comprises a first data block of the file and the second portion of the file comprises a second data block of the file.
10. The method of claim 2, wherein the first storage device is a non-volatile memory device and the second storage device is a solid state drive.
11. A system for managing a storage system, the storage system comprising a first storage device and a second storage device different from the first storage device, the system comprising:
a processor; and
a memory, wherein the memory stores instructions that, when executed by the processor, cause the processor to:
receive a first storage operation for a first portion of a file;
identify a data structure associated with the file;
identify the first storage device for the first portion of the file based on the data structure; and
send the first storage operation to the first storage device, wherein the first storage device updates or accesses the first portion of the file in response to the first storage operation.
12. The system of claim 11, wherein the data structure comprises a tree data structure having a first node and a second node, wherein the first node comprises first information about a first portion of the file and the second node comprises second information about a second portion of the file, wherein the second portion of the file is stored in the second storage device.
13. The system of claim 12, wherein the first information identifies first metadata in a first file system and the second information identifies second metadata in a second file system different from the first file system.
14. The system of claim 13, wherein the file is a logical file in a virtual namespace, wherein the logical file abstracts the first file system and the second file system from the application.
15. The system of claim 14, wherein the first storage operation is directed to a virtual namespace, wherein the instructions further cause the processor to:
receive a second storage operation directed to the virtual namespace;
identify the second storage device; and
send the second storage operation to the second storage device.
16. The system of claim 12, wherein the instructions further cause the processor to:
receive a second storage operation for a second portion of the file;
identify the data structure associated with the file;
identify the second storage device for the second portion of the file based on the data structure; and
send the second storage operation to the second storage device, wherein, in response to the second storage operation, the second storage device updates the second portion of the file simultaneously with the first storage device updating the first portion of the file.
17. The system of claim 16, wherein the instructions further cause the processor to:
identify a first processing thread assigned to the first storage device;
determine a throughput of the first processing thread; and
reassign the first processing thread to the second storage device based on the throughput.
18. The system of claim 17, wherein the instructions further cause the processor to:
calculate the utilization of processing resources by the first processing thread;
determine availability of processing resources in a first processing group; and
borrow processing resources from the second processing group in response to the determination.
19. The system of claim 12, wherein the first portion of the file comprises a first data block of the file and the second portion of the file comprises a second data block of the file.
20. The system of claim 12, wherein the first storage device is a non-volatile memory device and the second storage device is a solid state drive.
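Claims 7-8 and 17-18 describe reassigning a processing thread between storage devices based on its measured throughput, and borrowing processing resources from a second processing group when needed. The Python sketch below is a hypothetical illustration of that scheduling idea only; the threshold values (100 MB/s throughput, 0.9 utilization), the notion of integer "CPU shares", and the names IoThread and ProcessingGroup are assumptions made for this example and do not come from the patent.

from dataclasses import dataclass


@dataclass
class IoThread:
    name: str
    device: str              # device the thread currently serves
    throughput_mbps: float   # measured throughput on that device
    cpu_utilization: float   # fraction of one core in use (0.0 - 1.0)


@dataclass
class ProcessingGroup:
    name: str
    cpu_shares: int          # processing resources still free in the group


def rebalance(thread, other_device, min_throughput_mbps=100.0):
    # Reassign the thread to the other device if its measured throughput
    # falls below an (assumed) threshold.
    if thread.throughput_mbps < min_throughput_mbps:
        thread.device = other_device
    return thread


def borrow_if_needed(thread, first_group, second_group, shares_needed=1):
    # Grant resources from the first group when it has availability;
    # otherwise borrow the shares from the second group.
    if thread.cpu_utilization < 0.9:
        return "no extra resources needed"
    if first_group.cpu_shares >= shares_needed:
        first_group.cpu_shares -= shares_needed
        return "allocated %d share(s) from %s" % (shares_needed, first_group.name)
    second_group.cpu_shares -= shares_needed
    return "borrowed %d share(s) from %s" % (shares_needed, second_group.name)


if __name__ == "__main__":
    t = IoThread("io-0", device="nvm0", throughput_mbps=40.0, cpu_utilization=0.95)
    print(rebalance(t, other_device="ssd0").device)           # -> ssd0
    print(borrow_if_needed(t, ProcessingGroup("group-a", 0),
                           ProcessingGroup("group-b", 4)))    # -> borrowed 1 share(s) ...

In this sketch the borrow happens whenever the first group lacks free shares; an actual scheduler would also track when the borrowed resources are returned.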
CN202310200880.6A 2022-03-03 2023-03-03 System and method for heterogeneous storage systems Pending CN116700605A (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US63/316,403 2022-03-03
US63/350,818 2022-06-09
US63/355,377 2022-06-24
US17/900,830 2022-08-31
US17/900,830 US11928336B2 (en) 2022-03-03 2022-08-31 Systems and methods for heterogeneous storage systems

Publications (1)

Publication Number Publication Date
CN116700605A true CN116700605A (en) 2023-09-05

Family

ID=87844063

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310200880.6A Pending CN116700605A (en) 2022-03-03 2023-03-03 System and method for heterogeneous storage systems

Country Status (1)

Country Link
CN (1) CN116700605A (en)

Legal Events

Date Code Title Description
PB01 Publication