WO2015010394A1 - Data sending method, data receiving method and storage device - Google Patents
Data sending method, data receiving method and storage device
- Publication number
- WO2015010394A1 (PCT/CN2013/087229)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- storage device
- address information
- written
- time slice
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/2053—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
- G06F11/2094—Redundant storage or storage space
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/2053—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
- G06F11/2056—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring
- G06F11/2064—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring while ensuring consistency
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0646—Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
- G06F3/065—Replication mechanisms
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/14—Error detection or correction of the data by redundancy in operation
- G06F11/1402—Saving, restoring, recovering or retrying
- G06F11/1446—Point-in-time backing up or restoration of persistent data
- G06F11/1448—Management of the data involved in backup or backup restore
- G06F11/1451—Management of the data involved in backup or backup restore by selection of backup contents
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/14—Error detection or correction of the data by redundancy in operation
- G06F11/1402—Saving, restoring, recovering or retrying
- G06F11/1446—Point-in-time backing up or restoration of persistent data
- G06F11/1458—Management of the backup or restore process
- G06F11/1464—Management of the backup or restore process for networked environments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/2053—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
- G06F11/2056—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring
- G06F11/2071—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring using a plurality of controllers
- G06F11/2074—Asynchronous techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/2097—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements maintaining the standby controller/processing unit updated
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0866—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2201/00—Indexing scheme relating to error detection, to error correction, and to monitoring
- G06F2201/82—Solving problems relating to consistency
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2201/00—Indexing scheme relating to error detection, to error correction, and to monitoring
- G06F2201/84—Using snapshots, i.e. a logical point-in-time copy of the data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2201/00—Indexing scheme relating to error detection, to error correction, and to monitoring
- G06F2201/885—Monitoring specific for caches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/10—Providing a specific technical effect
- G06F2212/1032—Reliability improvement, data loss prevention, degraded operation etc
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/46—Caching storage objects of specific type in disk cache
- G06F2212/466—Metadata, control data
Definitions
- BACKGROUND Data disaster tolerance, also known as remote data replication technology, refers to establishing an off-site data system that maintains an available copy of the local data. In the event of a disaster that destroys the local data and the entire application system, the system retains at least one available copy of the critical business data at the remote site.
- a typical data disaster recovery system includes a production center and a disaster recovery center.
- in the production center, hosts and storage arrays are deployed for normal service operations.
- in the disaster recovery center, hosts and storage arrays are deployed to take over services after a disaster occurs in the production center.
- the storage array of the production center or the disaster recovery center includes multiple data volumes; a data volume is a logical storage space mapped from physical storage space. After the data generated by the services of the production center is written to the production array, it can be copied to the disaster recovery center through the DR link and written to the disaster recovery array. To ensure that the data in the disaster recovery center can support service takeover after a disaster occurs, the data copied to the disaster recovery array must be consistent.
- Ensuring data consistency essentially means that, where write requests depend on one another, those dependencies must be preserved.
- Applications, operating systems, and databases all rely on this ordering of write data requests to run their services. For example, if write data request 1 must be issued before write data request 2, the order is fixed: the system ensures that write data request 1 has completely returned success before write data request 2 is sent. An inherent recovery method can therefore be relied on when a failure interrupts execution. Otherwise, a situation may occur in which the data stored by write data request 2 can be read but the data stored by write data request 1 cannot, which leaves the service unrecoverable.
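The dependent-write ordering described above can be sketched as follows; the `storage` dict, `write` function, and request format are illustrative assumptions, not part of the patent:

```python
# Dependent writes: write request 2 is only issued after write request 1
# has completely returned success.
storage = {}

def write(address, data):
    """Simulate a write request that returns success on completion."""
    storage[address] = data
    return True  # acknowledgment: the write has fully completed

def dependent_writes(requests):
    """Issue each write only after the previous one succeeded."""
    for address, data in requests:
        ok = write(address, data)
        if not ok:
            # a failure here means later, dependent writes are never issued,
            # so the on-disk state stays recoverable
            raise IOError("write failed; later dependent writes not issued")

dependent_writes([(0x10, b"request-1"), (0x20, b"request-2")])
```

If the first write fails, the second is never sent, which is exactly the property the recovery logic depends on.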
- a snapshot is an image of data at a point in time (the point in time when the copy begins).
- the purpose of a snapshot is to create a state view of a data volume at a specific point in time. Through this view, only the data of the data volume as it existed at creation time can be seen; modifications to the data volume after that point (newly written data) are not reflected in the snapshot view. Using this snapshot view, the data can be copied.
- since snapshot data is "stationary", the production center can copy the snapshot data to the disaster recovery center after the snapshot point in time, completing remote data replication while continuing to execute write data requests at the production center.
- data consistency requirements can also be met. For example, if the data of write data request 2 is successfully copied to the disaster recovery center but the data of write data request 1 is not, the disaster recovery center can use the snapshot data to roll back to the state before write data request 2 was written.
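As an illustration of the snapshot behaviour just described (the dict-based volume and `copy.deepcopy` snapshot are assumptions for the sketch, not the patent's actual mechanism):

```python
import copy

# A snapshot is a point-in-time view: writes after the snapshot point are
# not reflected in it, so the volume can be rolled back to a consistent state.
volume = {"blk0": "A", "blk1": "B"}
snapshot = copy.deepcopy(volume)   # state view at the snapshot point

volume["blk1"] = "B'"              # write after the snapshot point

restored = copy.deepcopy(snapshot)  # recovery: restore the earlier state
```

The write to `blk1` after the snapshot changes the volume but not the snapshot view, which is why the disaster recovery center can always fall back to the snapshot state.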
- however, when the production center performs snapshot processing while executing write data requests, the generated snapshot data is saved in a data volume dedicated to storing snapshot data. Therefore, when the production center copies the snapshot data to the disaster recovery center, it must first read the snapshot data stored in the data volume into the cache and then send it to the disaster recovery center. The data used to generate the snapshot data may still exist in the cache, but this part of the data cannot be reused: each copy must first read the snapshot data from the data volume, which lengthens data replication and lowers its efficiency.
Summary of the invention
- the embodiments of the present invention provide a data sending method that can send the information carried by a write data request directly from the cache of the first storage device to the second storage device, thereby improving the efficiency of data replication.
- a first aspect of the embodiments of the present invention provides a data sending method, including: receiving, by a first storage device, a first write data request sent by a host, where the first write data request carries data to be written and address information;
- the first number is used to identify a current replication task, and the method further includes:
- the second number is recorded, and the second number is the number corresponding to the most recently completed copy task before the current copy task.
- the second possible implementation manner of the first aspect further includes:
- reading, from the cache, the data to be written and the address information corresponding to a number before the first number;
- sending, to the second storage device, the data to be written and the address information corresponding to the number before the first number.
- a third possible implementation manner of the first aspect of the embodiments of the present invention further includes: recording a current time slice number, where the current time slice number is used to generate the first number.
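A minimal sketch of the sending flow described in this first aspect, assuming integer time slice numbers and an in-memory list as the cache (all names here — `cache`, `current_slice`, `handle_write`, `trigger_replication` — are illustrative):

```python
cache = []            # entries: (number, address, data)
current_slice = 0     # current time slice number

def handle_write(address, data):
    # Each incoming write is tagged with the current time slice number.
    cache.append((current_slice, address, data))

def trigger_replication(send):
    """Advance the slice number, then send entries tagged with the old one."""
    global current_slice
    first_number = current_slice
    current_slice += 1            # subsequent writes get the new number
    for number, address, data in cache:
        if number == first_number:
            send(address, data)   # sent directly from cache, no volume read
    return first_number

handle_write(0x00, b"alpha")
sent = []
trigger_replication(lambda a, d: sent.append((a, d)))
handle_write(0x08, b"beta")       # tagged with the new number, not sent yet
```

Because the number is advanced first, writes arriving during the copy are cleanly separated from the data being replicated.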
- a second aspect of the embodiments of the present invention provides a data receiving method, including:
- the second storage device receives the address information sent by the first storage device
- the second storage device acquires the data to be written corresponding to a first number according to the address information, where the address information corresponding to the first number is the same as the received address information, and the first number is a number before the current time slice number; a second number is added to the data to be written corresponding to the first number, and it is written into the cache.
- the method further includes: recording the current time slice number, where the current time slice number is used to generate the second number.
- the method further includes: receiving a read data request sent by the host, where the read data request includes the received address information; determining that the latest number corresponding to the received address information is the second number; and sending the data to be written corresponding to the second number to the host.
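The receiving side described above might be sketched as follows, assuming a per-address map of numbered versions (the structure and names are illustrative, not from the patent):

```python
versions = {}   # address -> {number: data}

def store(address, number, data):
    """Write a numbered version of the data for this address into the cache."""
    versions.setdefault(address, {})[number] = data

def on_source_failure(address, first_number, second_number):
    # The data tagged with the first number is the last consistent copy;
    # re-tag it with the second number so reads return consistent data.
    data = versions[address][first_number]
    store(address, second_number, data)

def read(address):
    """A host read returns the data carrying the latest number."""
    nums = versions[address]
    return nums[max(nums)]

store(0x10, 1, b"consistent")
on_source_failure(0x10, first_number=1, second_number=2)
```

After the failover, the latest number for the address is the second number, so the host read returns the consistent copy.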
- a third aspect of the embodiments of the present invention provides a storage device, including:
- a receiving module configured to receive a first write data request sent by the host, where the first write data request carries data to be written and address information;
- a read/write module configured to add the first number to the data to be written and the address information and write them into the cache, where the first number is the current time slice number; and to read, from the cache, the data to be written and the address information corresponding to the first number;
- a current time slice number manager configured to modify the current time slice number to identify the information carried by subsequent write data requests;
- a sending module configured to send the to-be-written data and address information to the second storage device.
- the first number is used to identify a current replication task.
- the current time slice number manager is further configured to record a second number, where the second number is a number corresponding to the most recently completed copy task before the current copy task.
- the read/write module is further configured to read, from the cache, the data to be written and the address information corresponding to the numbers after the second number and before the first number;
- the sending module is further configured to send, to the second storage device, the data to be written and the address information corresponding to the numbers after the second number and before the first number.
- the current time slice number manager is further configured to record a current time slice number, where the current time slice number is used to generate the first number .
- a fourth aspect of the embodiments of the present invention provides a storage device, including:
- a receiving module configured to receive address information sent by the first storage device
- a searching module configured to, when determining that the first storage device is faulty, acquire the data to be written corresponding to a first number according to the address information, where the address information corresponding to the first number is the same as the received address information, and the first number is a number before the current time slice number;
- a write module configured to add a second number to the data to be written and the address information corresponding to the first number, and write them into the cache.
- the storage device further includes: a current time slice number manager, configured to record the current time slice number, where the current time slice number is used to generate the second number.
- the receiving module is further configured to receive a read data request sent by a host, where the read data request includes the received address information;
- the searching module is further configured to determine that the latest number corresponding to the received address information is the second number
- the storage device further includes a sending module, where the sending module is configured to send data to be written corresponding to the second number to the host.
- a fifth aspect of the embodiments of the present invention provides a storage device, including: a processor, a memory, and a communication bus;
- the processor and the memory communicate via the communication bus;
- the memory is configured to store a program;
- the processor is configured to execute the program to: receive a first write data request sent by the host, the first write data request carrying the data to be written and address information; add a first number to the data to be written and the address information and write them into the cache, where the first number is the current time slice number; read, from the cache, the data to be written and the address information corresponding to the first number; modify the current time slice number to identify the information carried by subsequent write data requests; and send the data to be written and the address information to the second storage device.
- the first number is used to identify a current replication task, and the processor is further configured to:
- the second number is recorded, and the second number is the number corresponding to the most recently completed copy task before the current copy task.
- the processor is further configured to: after reading the second number from the cache, read the data to be written and the address information corresponding to the numbers after the second number and before the first number, and send them to the second storage device.
- the processor is further configured to: record a current time slice number, where the current time slice number is used to generate the first number.
- a sixth aspect of the embodiments of the present invention provides a storage device, including: a processor, a memory, and a communication bus;
- the processor and the memory communicate via the communication bus;
- the memory is configured to store a program;
- the processor is configured to execute the program to:
- the second storage device acquires the data to be written corresponding to a first number according to the address information, where the address information corresponding to the first number is the same as the received address information, and the first number is a number before the current time slice number; a second number is added to the data to be written and the address information corresponding to the first number, and they are written into the cache.
- the processor is further configured to record the current time slice number, where the current time slice number is used to generate the second number.
- the processor is further configured to: receive a read data request sent by the host, where the read data request includes the received address information; determine that the latest number corresponding to the received address information is the second number; and send the data to be written corresponding to the second number to the host.
- a seventh aspect of the embodiments of the present invention provides a data replication method, including:
- the first storage device reads the current time slice number when the current replication task is triggered
- Reading a second number where the second number is a number corresponding to the most recently completed copy task associated with the current copy task;
- a replication task associated with the current replication task refers to a replication task that belongs to the same replication relationship as the current replication task;
- the method further includes: receiving an identifier corresponding to the replication relationship; and the reading of the second number includes: reading the second number according to the identifier.
- before the current replication task is triggered, the method further includes: receiving a first write data request, where the first write data request includes the data to be copied and the address information of the data to be copied;
- the current time slice number is obtained by modifying the historical time slice number.
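The number selection in this seventh aspect — the first number lies after the second number (the last completed copy task) and before the current time slice number — can be illustrated under the assumption that numbers are consecutive integers (the patent only requires an ordering):

```python
def numbers_to_copy(second_number, current_slice):
    """Numbers strictly after the last completed copy task and strictly
    before the current time slice number read when the task is triggered."""
    return list(range(second_number + 1, current_slice))

# e.g. the last completed task covered number 3 and the current slice is 6:
# numbers 4 and 5 still need to be copied.
```

This also covers the catch-up case: if several replication cycles were missed, every intervening number is picked up in one task.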
- An eighth aspect of the embodiments of the present invention provides a storage device, including:
- a reading and writing module configured to read the current time slice number when the current copy task is triggered, and to read a second number, where the second number is the number corresponding to the most recently completed copy task associated with the current copy task;
- a determining module configured to determine a first number according to the current time slice number and the second number, where the first number is a number before the current time slice number read when the current copy task is triggered, and the first number is a number after the second number;
- a copying module configured to copy, to the second storage device, the to-be-copied data and the address information of the to-be-copied data saved in the cache corresponding to the first number.
- a replication task associated with the current replication task refers to a replication task that belongs to the same replication relationship as the current replication task;
- the storage device further includes: a receiving module
- the receiving module is configured to receive an identifier corresponding to the replication relationship
- the read/write module is specifically configured to read the second number according to the identifier.
- the receiving module is further configured to: before the current replication task is triggered, receive a first write data request, where the first write data request includes the data to be copied and the address information of the data to be copied;
- the read/write module is further configured to: add the first number to the data to be copied and the address information of the data to be copied, and write them into the cache, where the first number is a historical time slice number.
- the current time slice number is obtained by modifying the historical time slice number.
- a ninth aspect of the embodiments of the present invention provides a storage device, including: a processor, a memory, and a communication bus;
- the processor and the memory communicate via the communication bus;
- the memory is configured to store a program;
- the processor is configured to execute the program, to implement the method according to any one of the seventh aspects of the embodiments of the present invention.
- In the embodiments of the present invention, the information carried by a write data request includes the data to be written and address information. A first number, which is the current time slice number, is added to the data to be written and the address information, and they are written into the cache. When a replication task is triggered, the data to be written and the address information corresponding to the first number are read from the cache and sent to the second storage device. In addition, when the replication task is triggered, the current time slice number is modified, so that when the first storage device subsequently receives write data requests it adds the modified current time slice number to the information they carry. The information that needs to be sent to the second storage device is thereby separated, within the cache, from the information carried by the write data requests that the first storage device is still receiving, and the information carried by the write data request is sent to the second storage device directly from the cache. Since the information is sent directly from the cache, there is no need to read the data from the data volume, so data replication takes less time and its efficiency is improved.
- FIG. 1 is a schematic diagram of an application network architecture of a data sending method according to an embodiment of the present invention.
- FIG. 2 is a flowchart of a data sending method according to an embodiment of the present invention.
- FIG. 3 is a flowchart of a data receiving method according to an embodiment of the present invention.
- FIG. 4 is a signaling diagram of a data sending method according to an embodiment of the present invention.
- FIG. 5 is a schematic structural diagram of a storage device according to an embodiment of the present disclosure.
- FIG. 6 is a schematic structural diagram of another storage device according to an embodiment of the present disclosure.
- FIG. 7 is a schematic structural diagram of still another storage device according to an embodiment of the present disclosure.
- FIG. 8 is a schematic structural diagram of still another storage device according to an embodiment of the present disclosure.
- FIG. 9 is a schematic diagram of an application network architecture of a data replication method according to an embodiment of the present invention.
- FIG. 10 is a flowchart of a data replication method according to an embodiment of the present invention.
- FIG. 11 is a schematic structural diagram of still another storage device according to an embodiment of the present disclosure.
- FIG. 12 is a schematic structural diagram of still another storage device according to an embodiment of the present invention.
Detailed description
- FIG. 1 is a schematic diagram of the system architecture of the data transmission method provided by this embodiment. As shown in FIG. 1, the production center includes a production host, a connection device, and a production array (corresponding to the first storage device in the following embodiments); the system architecture of the disaster recovery center is similar to that of the production center, including a disaster recovery host, a connection device, and a disaster recovery array (corresponding to the second storage device in the following embodiments).
- the production center and the disaster recovery center can transmit data over IP (Internet Protocol) or FC (Fibre Channel).
- the control center can be deployed on the production center side or on the disaster recovery center side. It can also be deployed in a third-party device between the production center and the disaster recovery center.
- the control center is configured to send a signal to the disaster recovery array to take over the production array to process the host service when the production array fails.
- Both the production host and the disaster recovery host can be any computing device known in the art, such as servers, desktop computers, and the like. Inside the host, an operating system and other applications are installed.
- the connection device can include any interface known in the prior art between a storage device and a host, such as a fibre switch or another existing switch.
- the production array and the disaster recovery array may each be a storage device known in the prior art, such as a Redundant Array of Independent Disks (also Redundant Arrays of Inexpensive Disks, RAID), a Just a Bunch Of Disks (JBOD) enclosure, or one or more interconnected disk drives of a Direct Access Storage Device (DASD), such as a tape library, or one or more storage units of a tape storage device.
- the storage space of the production array may include multiple data volumes.
- the data volume is a logical storage space mapped by physical storage space.
- the data volume may be a Logical Unit Number (LUN) or a file system.
- the structure of the disaster recovery array is similar to the production array.
- FIG. 1 illustrates a data sending method according to an embodiment of the present invention.
- the first storage device includes a controller, a cache memory (hereinafter referred to as a cache), and a storage medium.
- the controller is the processor of the first storage device, configured to execute I/O commands and other data services;
- the cache is a memory between the controller and the hard disk; its capacity is smaller than that of the hard disk, but its speed is much higher;
- the storage medium is the main storage of the first storage device, and generally refers to a non-volatile storage medium, for example, a magnetic disk.
- here, the physical storage space included in the first storage device is referred to as the storage medium. The following steps may specifically be performed by the controller in the first storage device.
- Step S101 The first storage device receives a first write data request sent by the host, where the first write data request carries the data to be written and the address information.
- the address information may include a logical block address (LBA).
- the address information may further include an ID of the data volume of the first storage device.
- Step S102 Add a first number to the data to be written and the address information, and write them into the cache, where the first number is the current time slice number.
- the first storage device may include a current time slice number manager that stores the current time slice number. The current time slice number may be represented by a numerical value, such as 0, 1, 2, or by letters, such as a, b, c, which is not limited here.
- specifically, the information carried by the modified first write data request is written into the cache, so that the data to be written, the address information, and the first number carried by the first write data request are saved in the cache.
- it may be understood that multiple write data requests may be received over a period of time, and the first number also needs to be added to the information carried by each of them before writing it into the cache. It should be noted that the first number is added to the information carried in a write data request only before the current time slice number is changed.
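Steps S101 and S102 above can be sketched as follows. This is only an illustrative model of the described behavior; the class name, method names, and the tuple layout of cache entries are assumptions, not taken from the patent.

```python
# Illustrative sketch of steps S101-S102: every incoming write request is
# tagged with the current time slice number (the "first number") before it
# is placed in the cache. All identifiers here are hypothetical.

class FirstStorageDevice:
    def __init__(self):
        self.current_tpn = 1   # current time slice number (CTPN)
        self.cache = []        # cache entries: (number, volume_id, address, data)

    def handle_write(self, volume_id, address, data):
        # Step S102: add the first number to the data to be written and the
        # address information, then write them into the cache.
        self.cache.append((self.current_tpn, volume_id, address, data))

dev = FirstStorageDevice()
dev.handle_write("primary", "A", b"data-A")
dev.handle_write("primary", "B", b"data-B")
# both entries carry number 1 until the current time slice number changes
```

Every request received before the number changes shares the same tag, which is what later lets a replication task select exactly one batch.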
- Step S103 Read the to-be-written data and address information corresponding to the first number from the cache.
- when the replication task is triggered, the first storage device may read the data to be written and the address information corresponding to the first number from the cache. It may be understood that there may be more than one piece of data to be written and address information corresponding to the first number.
- the copy task is that the first storage device sends, to the second storage device, the information carried by the write data requests received by one data volume over a period of time, where the information carried by each of these write data requests has been added with the same number as the current time slice number.
- the replication task may be triggered by a timer, or triggered manually, which is not limited here.
- the purpose of the replication is to send the data to be written carried by the write data request received by the first storage device to the second storage device for storage, so that when the first storage device fails, the second storage device can take over the operation of the first storage device.
- the address information (for example, the LBA) carried by the write data request also needs to be sent to the second storage device; the LBA is used to indicate the address at which the second storage device stores the data to be written. Since the second storage device has the same physical structure as the first storage device, an LBA applicable to the first storage device is also applicable to the second storage device.
- the copy task is for one data volume of the first storage device; when the first storage device includes multiple data volumes, each data volume has its own corresponding copy task.
- Step S104 Modify the current time slice number to identify information carried by the subsequent write data request.
- the current time slice number manager needs to modify the current time slice number.
- the information carried by a subsequent write data request needs to be added with another number, which is assigned from the modified current time slice number.
- in this way, the information carried by the write data requests that need to be sent to the second storage device can be distinguished, in the cache, from the information carried by the write data requests that the first storage device is still receiving.
- there is no fixed order between step S103 and step S104.
- Step S105 Send the data to be written and the address information to the second storage device.
- the first storage device sends the data to be written and the address information corresponding to the first number read from the cache to the second storage device.
- the first storage device may directly send all the read data to be written and address information to the second storage device; alternatively, after obtaining the ID of the data volume of the second storage device, the first storage device may generate a new write data request for each piece of data to be written and address information together with the ID of the data volume of the second storage device, and then send the new write data requests to the second storage device.
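Steps S103 to S105 can be sketched together, reusing the (number, volume ID, address, data) cache layout from the earlier sketch. The function name and tuple shapes are illustrative assumptions, not the patent's own interfaces.

```python
# Illustrative sketch of steps S103-S105: when the copy task is triggered,
# the entries tagged with the first number are read from the cache, the
# current time slice number is modified, and new write requests carrying
# the second storage device's volume ID are generated for sending.

def trigger_replication(cache, current_tpn, dest_volume_id):
    first_number = current_tpn
    # Step S103: read the data to be written and the address information
    # corresponding to the first number from the cache.
    batch = [e for e in cache if e[0] == first_number]
    # Step S104: modify the current time slice number so that subsequent
    # write requests are tagged with a different number.
    new_tpn = current_tpn + 1
    # Step S105: generate new write requests carrying the ID of the data
    # volume of the second storage device (returned here instead of sent).
    requests = [(dest_volume_id, addr, data) for (_, _, addr, data) in batch]
    return requests, new_tpn

cache = [(1, "primary", "A", b"data-A"), (1, "primary", "B", b"data-D")]
requests, new_tpn = trigger_replication(cache, 1, "secondary")
# requests now carry the secondary volume ID; new_tpn is 2
```

Because the batch is selected purely by number, requests still arriving under the new number are never mixed into the outgoing batch.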
- in the method provided in this embodiment, the information carried by a write data request includes the data to be written and the address information; a first number is added to the data to be written and the address information, where the first number is the current time slice number.
- when the copy task is triggered, the data to be written and the address information corresponding to the first number are read from the cache and sent to the second storage device.
- meanwhile, the current time slice number is modified, so that when the first storage device receives a subsequent write data request, it adds a number equal to the modified current time slice number to the information carried by that request.
- in this way, the information carried by the write data requests that need to be sent to the second storage device is distinguished from the information carried by the write data requests that the first storage device is still receiving, and the information carried by the write data requests is sent to the second storage device directly from the cache. Since the information is sent directly from the cache, there is no need to read data from the data volume, so the data copying time is shorter and the efficiency of data copying is improved.
- in addition, when the replication task is triggered, the first storage device sends the data to be written and the address information corresponding to the current time slice number to the second storage device, and simultaneously modifies the current time slice number to identify the information carried by subsequent write data requests.
- when the next replication task is triggered, the data to be written and the address information corresponding to the modified current time slice number are sent to the second storage device, and the current time slice number is modified again. This ensures that the first storage device completely transmits the information carried by the write data requests it receives to the second storage device in batches.
- optionally, when there is a second disaster recovery center whose storage device is a third storage device, the first storage device also needs to send the information carried by the received write data requests to the third storage device.
- when a replication task is triggered, the current time slice number manager modifies the current time slice number, so the numbers assigned to the replication tasks for the second storage device and the third storage device differ. However, the information carried by the write data requests corresponding to the number before the current time slice number was modified may not yet have been sent to the third storage device.
- Step S106 Record a second number, where the second number is the number corresponding to the last completed copy task before the current copy task.
- the first number is the same as the current time slice number, and may be used to identify the current copy task.
- the current copy task refers to the first storage device sending, to the second storage device, the information carried by the write data requests received by a data volume in the current time period, where the information carried by each of these write data requests has been added with the same number as the current time slice number.
- the second number is the number corresponding to the most recently completed copy task before the current copy task.
- the current time slice number may be modified when a replication task is initiated to the storage devices of other disaster recovery centers. Therefore, the number corresponding to the last completed replication task needs to be recorded.
- if there is another number between the second number and the first number, the information carried by the write data requests corresponding to that number has not been sent to the second storage device, and step S107 needs to be performed.
- Step S107 Read from the cache the data to be written and the address information corresponding to the numbers after the second number and before the first number.
- the specific reading process is similar to step S103 and is not described here again.
- step S107 may be performed in sequence with step S103, or may be performed simultaneously.
- Step S108 Send, to the second storage device, the data to be written and the address information corresponding to the numbers after the second number and before the first number.
- the specific sending process is similar to step S105, and details are not described here again.
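Steps S106 to S108 amount to finding the numbers that fall between the last completed copy task and the current one. A minimal sketch, under the assumption that numbers are consecutive integers (the function name is hypothetical):

```python
# Illustrative sketch of steps S106-S108: if any number lies between the
# second number (last completed copy task) and the first number (current
# copy task), its cached entries were never sent to this storage device
# and must be replicated as well.

def missed_numbers(second_number, first_number):
    # numbers strictly after the second number and strictly before the
    # first number correspond to data not yet sent to this storage device;
    # the first number itself is handled by the current copy task
    return list(range(second_number + 1, first_number))

# e.g. the current task uses number 5 and the last completed task used
# number 2: entries numbered 3 and 4 must also be read from the cache
# and sent to the second storage device
```

When the two numbers are adjacent, the list is empty and no extra sending is needed.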
- FIG. 2 is an embodiment of a data receiving method according to the present invention.
- Step S201 The second storage device receives the address information sent by the first storage device.
- specifically, the second storage device may receive the data to be written and the address information sent by the first storage device, and may also receive a write data request sent by the first storage device, where the write data request includes the data to be written and the address information. The address information may be a logical block address (LBA).
- the address information may further include an ID of the data volume of the second storage device. Understandably, there can be more than one address information here.
- after receiving the data to be written and the address information, the second storage device adds a number equal to its current time slice number to the data to be written and the address information, and writes them into the cache, so that the cache saves that number together with the data to be written and the address information.
- the second storage device also includes a current time slice number manager, where the current time slice number manager stores the current time slice number. The current time slice number may be represented by a numerical value, for example, 0, 1, 2, or by letters, such as a, b, c, which is not limited here.
- the current time slice number here need not be associated with the current time slice number in the first storage device.
- Step S202 When it is determined that the first storage device is faulty, the second storage device acquires the data to be written corresponding to a first number according to the address information, where the address information corresponding to the first number is the same as the received address information, and the first number is a number before the current time slice number.
- specifically, the second storage device may receive the information carried by multiple write data requests, add the same number as the current time slice number to the information carried by each write data request, and store it in the cache.
- when the first storage device is faulty, the second storage device may have received only part of the data to be written corresponding to the current time slice number of the first storage device. In this case, the data saved by the second storage device may be incomplete; if the second storage device directly takes over from the first storage device, data consistency cannot be guaranteed.
- for example, when the host reads data, the second storage device searches for the latest number corresponding to the address information and sends the host the data to be written corresponding to the current time slice number; however, that data may be incomplete. Therefore, the data in the cache of the second storage device needs to be restored to the data corresponding to the number before the current time slice number of the second storage device.
- the method for determining that the first storage device is faulty may be that the control center sends a signal to the second storage device, where the signal is used to indicate that the first storage device is faulty, and the second storage device needs to take over the first storage device to process the host service.
- when a replication task is completed, the control center may send an indication of successful copying to the first storage device and the second storage device, respectively. If the second storage device has not received the indication, the current replication task is not completed.
- completion of the replication task means that the first storage device has sent the information carried by all the write data requests corresponding to the current time slice number to the second storage device, and the second storage device has received all of it.
- when the second storage device determines that the first storage device is faulty, if the current replication task has been completed, the second storage device can directly take over the operation of the first storage device and data consistency is guaranteed. That situation is outside the scope of the embodiments of the present invention.
- the data in the cache of the second storage device needs to be restored to the data corresponding to the number before the current time slice number.
- the specific recovery method may be: according to the received address information, searching the address information corresponding to the number immediately before the current time slice number for the same address information; if it is not found, continuing to search the address information corresponding to the next earlier number, until the same address information is found, and then obtaining the data to be written corresponding to that number.
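That backward search can be sketched as follows, assuming numeric time slice numbers and the (number, volume ID, address, data) cache layout used in the earlier sketches; the function name is an illustrative assumption.

```python
# Illustrative sketch of the recovery search: starting from the number just
# before the current time slice number, walk backwards until a cache entry
# with the same address information is found.

def recover(cache, address, current_tpn):
    for number in range(current_tpn - 1, -1, -1):
        for (n, volume_id, addr, data) in cache:
            if n == number and addr == address:
                # data corresponding to an earlier, fully received task
                return data
    return None  # no earlier entry exists for this address

cache = [(11, "vol", "A", b"old-A"),
         (12, "vol", "A", b"data-A"),
         (13, "vol", "A", b"partial-E")]
# with current time slice number 13, recovery skips the incomplete
# number-13 entry and returns the number-12 data for address A
recovered = recover(cache, "A", 13)
```

Entries tagged with the current (possibly incomplete) number are never returned, which is exactly what restores consistency.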
- Step S203 Add a second number to the data to be written and the address information corresponding to the first number, and write them into the cache.
- the second number is a number obtained by modifying the current time slice number, and is also the latest number saved in the cache in this embodiment.
- subsequently, when the host sends a read data request to the second storage device to read the data saved at the address information, the second storage device learns that the latest number corresponding to the address information is the second number, and sends the host the data to be written corresponding to the second number. This ensures data consistency.
- in this embodiment, the second storage device receives the address information sent by the first storage device; when the first storage device fails, it obtains, according to the address information, the data to be written corresponding to the number before the current time slice number, adds the second number to that data to be written and the address information, and stores them in the cache.
- FIG. 3 is an embodiment of a data transmission method according to the present invention.
- the cache in the production array is referred to as the first cache
- the cache in the disaster recovery array is referred to as the second cache.
- the method includes:
- Step S301 The production array receives the write data request A sent by the production host.
- the write data request A includes a volume ID, a to-be-written address A, and a to-be-written data A
- the to-be-written address A refers to a logical address of the production array into which the data to be written A is to be written, such as an LBA.
- usually, the production array needs to convert the LBA into a PBA (Physical Block Address) when executing the write data request A, and then write the data to be written A into the storage medium according to the PBA.
- the volume ID is the ID of the data volume corresponding to the write data request A.
- in this embodiment, the production array including one data volume (hereinafter referred to as the primary volume) is taken as an example, so the information carried by the write data request A includes the primary volume ID, the address to be written A, and the data to be written A.
- Step S302 The production array modifies the write data request A into a write data request A′, where the write data request A′ includes the information carried by the write data request A and a first number.
- specifically, the controller of the production array may include a current time slice number (CTPN) manager, in which the current time slice number is recorded. The current time slice number is used to generate the first number; specifically, the first number is equal to the current time slice number.
- according to the current time slice number, the write data request A is modified into the write data request A′; the modification may be performed by adding the first number to the information carried by the write data request A.
- for example, the current time slice number may be 1, and the first number is then also 1.
- optionally, when the write data request A is received, a timestamp is recorded, and the timestamp is matched against a pre-saved number sequence to determine the number corresponding to the timestamp.
- the sequence of numbers may be a mapping table or other forms, which is not limited herein.
- the sequence of numbers includes a plurality of numbers, each number corresponding to an interval of a time stamp. As shown in Table 1:
- the write data request A can then be modified into the write data request A′ according to the determined number.
- Step S303 The production array writes the write data request A' into the first cache, so that the information carried by the write data request A' is saved in the first cache.
- the information carried by the write data request A' includes the first number, the main volume ID, the address to be written A, and the data to be written A.
- before the current time slice number is modified, the information carried by all received write data requests will be added with the first number.
- for example, a write data request B may also be received and modified into a write data request B′, so that the write data request B′ also includes the first number; a write data request C may be received and modified into a write data request C′, so that the write data request C′ also includes the first number.
- the saved information in the first cache can be as shown in Table 2:
- in this embodiment, the production array includes one data volume (which may be referred to as the primary volume), so the ID of the data volume carried by the write data request A, the write data request B, and the write data request C is the primary volume ID.
- in other embodiments, the production array may contain multiple data volumes, so the IDs of the data volumes carried by the write data request A, the write data request B, and the write data request C may differ.
- Table 2 is only an example of the format in which the information carried by write data requests is stored in the first cache; the information may also be stored in the form of a tree, which is not limited here.
- the number, the volume ID, and the address to be written can be regarded as the index of Table 2; the corresponding data to be written can be found according to the index. When the index is the same, the corresponding data to be written should also be the same. Therefore, when writing a new write data request, it is necessary to determine whether the first cache already holds information with the same number, volume ID, and address to be written as the new write data request; if so, the information carried by the new write data request overwrites the original information.
- for example, assume a write data request D is received, where the write data request D includes the primary volume ID, the address to be written B, and the data to be written D. The write data request D is modified into a write data request D′ so that the write data request D′ also includes the first number. When the write data request D′ is written into the first cache, it is determined whether the first cache already stores information with the same number, volume ID, and address to be written as the write data request D′; if so, the information carried by the write data request D′ overwrites the original information. Since the number, volume ID, and address to be written carried in the write data request D′ are the same as those included in the write data request B′, the information of the write data request D′ overwrites the information of the write data request B′ in the first cache.
- the information saved in the first cache may be as shown in Table 3:
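The overwrite rule above (write data request D′ replacing write data request B′) can be sketched with a dictionary keyed by (number, volume ID, address to be written); all names are illustrative assumptions.

```python
# Illustrative sketch of the cache index rule: the number, the volume ID,
# and the address to be written together form the index, and a new write
# request with the same index overwrites the original entry.

def cache_write(cache, number, volume_id, address, data):
    cache[(number, volume_id, address)] = data  # same index -> overwrite

cache = {}
cache_write(cache, 1, "primary", "A", b"data-A")  # write data request A'
cache_write(cache, 1, "primary", "B", b"data-B")  # write data request B'
cache_write(cache, 1, "primary", "C", b"data-C")  # write data request C'
cache_write(cache, 1, "primary", "B", b"data-D")  # write data request D'
# the entry indexed (1, "primary", "B") now holds the data to be written D,
# matching the state shown in Table 3
```

Keying the cache this way keeps at most one pending version per address within a time slice, so a replication batch never sends stale intermediate writes.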
- Step S304 When the copy task is triggered, the production array modifies the current time slice number included in the CTPN manager; for example, the current time slice number can be changed from 1 to 2.
- to distinguish the current time slice number of the production array from the current time slice number of the disaster recovery array, the current time slice number of the production array is referred to as the first current time slice number, and the current time slice number of the disaster recovery array is referred to as the second current time slice number.
- subsequently, a write data request E is received, where the write data request E includes the primary volume ID, the address to be written A, and the data to be written E; the write data request E is modified into a write data request E′ so that the write data request E′ also contains the number 2. A write data request F is received, where the write data request F contains the primary volume ID, the address to be written F, and the data to be written F; the write data request F is modified into a write data request F′ so that the write data request F′ also contains the number 2.
- at this time, the information stored in the first cache may be as shown in Table 4 (number, volume ID, address to be written, data to be written):
- Step S305 The disaster recovery array modifies the second current time slice number included in the CTPN manager; for example, it can be modified from 11 to 12.
- specifically, the disaster recovery array may also include its own CTPN manager. When the CTPN manager in the production array modifies the first current time slice number, the control center can also send a control signal to the disaster recovery array, so that the disaster recovery array also modifies the second current time slice number contained in its own CTPN manager. Therefore, there is no fixed order between step S305 and step S304.
- Step S306A The production array reads the information carried by the write data request corresponding to the first number from the first cache.
- the information carried by the write data request corresponding to the first number is as shown in Table 3.
- Step S306B The production array obtains an ID of a data volume to be written into the disaster recovery array.
- Step S306C The production array generates a new write data request according to the ID of the data volume and the information carried by the write data request corresponding to the first number.
- specifically, a write data request A″ may be generated according to the ID of the data volume, the address to be written A, and the data to be written A; a write data request D″ may be generated according to the ID of the data volume, the address to be written B, and the data to be written D; and a write data request C″ may be generated according to the ID of the data volume, the address to be written C, and the data to be written C.
- in other embodiments, both the production array and the disaster recovery array may contain multiple data volumes; in that case, the IDs of the data volumes contained in the write data request A″, the write data request D″, and the write data request C″ may differ. The ID of each data volume in the disaster recovery array corresponds one-to-one with the ID of a data volume in the production array.
- Step S307 The production array sends the generated new write data request to the disaster recovery array.
- specifically, the production array sends the write data request A″, the write data request D″, and the write data request C″ to the disaster recovery array.
- Step S308 The disaster recovery array modifies the received write data request.
- specifically, the write data request A″ can be modified into a write data request A″′ according to the second current time slice number recorded in the CTPN manager; the modification may be performed by adding the number 12 to the information carried by the write data request A″. Likewise, the number 12 can be added to the information carried by the write data request D″ to modify it into a write data request D″′, and to the information carried by the write data request C″ to modify it into a write data request C″′.
- Step S309 The disaster recovery array writes the modified write data request to the second cache.
- the information stored in the second cache can be as shown in Table 5:
- Step S310 The disaster recovery array writes the data to be written into the storage medium at the address to be written, according to the address to be written carried by each write data request.
- Step S311 The production array writes the data to be written into the storage medium at the address to be written, according to the address to be written carried by each write data request.
- the cache of the production array needs to write the data in the cache to the hard disk when its space utilization reaches a certain threshold.
- the following information is stored in the first cache:
- for an address to be written that is carried by multiple write data requests, the data to be written carried by the write data request with the smaller number may be written first, followed by the data to be written carried by the write data request with the larger number; for example, the data to be written D is written first, and then the data to be written E. Alternatively, the data to be written carried by the write data request with the larger number may be written directly, without writing the data carried by the write data request with the smaller number; for example, the data to be written E is written directly.
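Either destage policy leaves the disk in the same final state; a sketch under the assumption of numeric numbers and the tuple layout used above (the function name is hypothetical):

```python
# Illustrative sketch of the destage choice: for one address, either write
# the smaller-numbered data first and then the larger-numbered data, or
# write only the data with the largest number -- the final on-disk state
# is identical, so the second option saves a write.

def destage(cache, volume_id, address):
    candidates = [(n, data) for (n, v, a, data) in cache
                  if v == volume_id and a == address]
    if not candidates:
        return None
    # keep only the data carried by the request with the largest number
    return max(candidates)[1]

cache = [(2, "primary", "A", b"data-D"),   # smaller number
         (3, "primary", "A", b"data-E")]   # larger number
# only the data to be written E reaches the disk for address A
```

Skipping the smaller-numbered write is safe because the larger-numbered data would overwrite it anyway.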
- there is no fixed order between step S310 and step S311.
- Step S312 When the copy task is triggered, the production array modifies the first current time slice number included in its CTPN manager; for example, the current time slice number can be changed from 2 to 3.
- subsequently, the number 3 is added to the information carried by the write data requests received by the production array.
- Step S313 The disaster recovery array modifies the second current time slice number included in the CTPN manager; for example, the second current time slice number can be modified from 12 to 13.
- after the second current time slice number in the CTPN manager of the disaster recovery array is modified from 12 to 13, the information carried by the write data requests subsequently received by the disaster recovery array will correspondingly be numbered 13.
- Step S314 The production array reads the information carried by the write data request corresponding to the number 2, and generates a corresponding write data request to send to the disaster recovery array.
- the information carried by the write data request corresponding to the number 2 includes the information carried by the write data request E and the information carried by the write data request F.
- specifically, the production array may generate a write data request E″ according to the ID of the data volume, the address to be written A, and the data to be written E, and generate a write data request F″ according to the ID of the data volume, the address to be written F, and the data to be written F. Therefore, the write data requests sent by the production array to the disaster recovery array are the write data request E″ and the write data request F″.
- it should be noted that when the production array sends write data requests to the disaster recovery array, they are not necessarily sent in sequence and may be sent in any order. Specifically, the write data request E″ may be sent first and then the write data request F″, or the write data request F″ may be sent first and then the write data request E″.
- at this time, the second current time slice number in the CTPN manager of the disaster recovery array is 13, so after receiving the write data request E″, the disaster recovery array needs to modify it into a write data request E″′ containing the number 13; likewise, after receiving the write data request F″, the disaster recovery array needs to modify it into a write data request F″′ containing the number 13.
- Step S315 The disaster recovery array receives the instruction to take over the production array to process the host service.
- the disaster recovery array needs to take over the production array to process the host service, so the disaster recovery array needs to meet the data consistency requirement.
- in the current replication period, the write data requests that the disaster recovery array needs to receive include the write data request E″ and the write data request F″.
- if both have been received and written into the second cache before the disaster recovery array begins to take over the production array to process the host service, the current replication cycle has been completed and the data consistency requirement is met.
- if the disaster recovery array has modified the write data request E″ into the write data request E″′ and successfully written it into the second cache, but the production array fails before the write data request F″ is successfully written into the second cache, and the disaster recovery array begins to take over the production array to process the host service, then the current replication task is not completed and the data consistency requirement is not met.
- likewise, if the disaster recovery array has modified the write data request F″ into the write data request F″′ and successfully written it into the second cache, but the production array fails before the write data request E″′ is successfully written into the second cache, and the disaster recovery array begins to take over the production array to process the host service, then the current replication task is not completed and the data consistency requirement is not met.
- in this case, the data in the cache of the disaster recovery array needs to be restored to the state when the copy task corresponding to the number 12 was completed.
- the following takes as an example the case in which the disaster recovery array has modified the write data request E″ into the write data request E″′ and successfully written it into the second cache, while the write data request F″ has not been successfully written into the second cache.
- Step S316 The disaster recovery array acquires the to-be-written address carried by the write data request that has been successfully written into the second cache in the current replication period.
- specifically, the write data request E″ has been successfully written into the second cache, and the address to be written that it carries is the address to be written A.
- Step S317 The disaster recovery array performs matching against the addresses to be written in the information carried by the write data requests corresponding to the previous number (for example, number 12), to find an address to be written that is the same as the address to be written A.
- if it is found, step S318 is performed; if not, the matching continues in the information carried by the write data requests corresponding to the next earlier number (for example, number 11), until an address to be written that is the same as the address to be written A carried by the write data request E″ is found.
- the information carried by the write data request corresponding to the number 12 is as shown in Table 5.
- The to-be-written address carried by write data request A' is the same as the to-be-written address carried by write data request E'''.
- When each write data request also includes the ID of a data volume, a match is found only when both the to-be-written address and the ID of the data volume are the same.
- Step S318: Generate, according to the found information corresponding to the to-be-written address, a new write data request and write it into the second cache, where the new write data request includes a modified number.
- Specifically, the information read from the second cache includes to-be-written address A and to-be-written data A (and may also include the ID of the data volume); a new write data request is generated from the read information plus a modified number (for example, the number modified from 13 to 14).
- Subsequently, when a host reads data, the disaster recovery array searches the second cache according to the ID of the data volume and to-be-written address A, and sends the host the to-be-written data that corresponds to that volume ID and address and carries the latest number.
- In this example, to-be-written data A corresponding to number 14 is sent from the second cache to the host.
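The rollback described in steps S316 to S318 can be sketched as follows. This is a minimal illustration only: the tuple layout of cache entries and all names are assumptions made for this sketch, not the patented implementation.

```python
# Hypothetical sketch of steps S316-S318. Cache entries are
# (number, volume_id, address, data) tuples; names are illustrative only.

def roll_back(cache, current_number, new_number):
    """Undo partially replicated writes tagged with current_number by
    re-writing, under new_number, the newest older data at each address."""
    partial = [e for e in cache if e[0] == current_number]
    for _, vol_id, addr, _ in partial:
        # Search numbers before the current one for entries with the same
        # data volume ID and to-be-written address (both must match).
        older = [e for e in cache
                 if e[0] < current_number and e[1] == vol_id and e[2] == addr]
        if older:
            prev = max(older, key=lambda e: e[0])
            # Generate a new write data request carrying the modified number.
            cache.append((new_number, vol_id, addr, prev[3]))
    return cache

cache = [
    (12, "vol1", "A", "data_A"),  # write data request A' (completed task 12)
    (13, "vol1", "A", "data_E"),  # write data request E''' (incomplete task 13)
]
roll_back(cache, current_number=13, new_number=14)
# A host read now picks the entry with the latest number for address A:
latest = max((e for e in cache if e[1] == "vol1" and e[2] == "A"),
             key=lambda e: e[0])
print(latest)  # (14, 'vol1', 'A', 'data_A')
```

Because the latest number (14) now carries the consistent data A from the completed task, host reads no longer see the partially replicated data.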
- In the foregoing embodiment, the production array can send the information carried by received write data requests directly from its cache to the disaster recovery array, without needing to read the related information from a data volume, which improves the efficiency of data replication; the disaster recovery array also guarantees data consistency.
- In the prior art, data replication is implemented by using snapshot data. Each time the production array executes a write data request, the data carried by the request is placed in the cache, the old data saved at the to-be-written address carried by the request is read and stored in a data volume, and only then is the data in the cache written to the to-be-written address; after this completes, a response message for the write data request can be returned. Because the snapshot processing step is added, the latency of write data request processing increases. In the embodiment of the present invention, no snapshot processing of the data is required, and although the write data request is modified, the modification takes little time. Therefore, compared with the prior art, the embodiment of the present invention reduces the latency of write data request processing.
- FIG. 5 is a schematic structural diagram of a storage device 50 according to an embodiment of the present invention.
- As shown in FIG. 5, the storage device 50 includes a receiving module 501, a reading and writing module 502, a current time slice number manager 503, and a sending module 504.
- the receiving module 501 is configured to receive a first write data request sent by the host, where the first write data request carries data to be written and address information.
- the address information may include a logical block address (LBA).
- the address information may further include an ID of the data volume of the storage device 50.
- The reading and writing module 502 is configured to add a first number to the data to be written and the address information and write them into the cache, where the first number is the current time slice number, and to read the data to be written and the address information corresponding to the first number from the cache.
- a current time slice number manager 503 may be included in the storage device 50.
- The current time slice number manager 503 stores the current time slice number, which may be represented by a numerical value, such as 0, 1, or 2, or by letters, such as a, b, or c; this is not limited here.
- Specifically, the information carried in the modified first write data request is written into the cache, so that the data to be written, the address information, and the first number carried by the first write data request are saved in the cache.
- Alternatively, the first number is added to the carried information, which is then written into the cache. It should be noted that, before the current time slice number is modified, the first number is added to the information carried in each received write data request.
- When a replication task is triggered, the storage device 50 can read the data to be written and the address information corresponding to the first number from the cache. It can be understood that there may be more than one set of data to be written and address information corresponding to the first number.
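How the reading and writing module tags incoming writes might look, as a minimal sketch; the class, method names, and tuple layout are assumptions for illustration, not the patented implementation.

```python
# Illustrative sketch: each incoming write data request is tagged with the
# current time slice number before being written into the cache.

class Device50Cache:
    def __init__(self):
        self.current_number = 1   # held by the current time slice number manager
        self.cache = []           # entries: (number, volume_id, lba, data)

    def handle_write(self, volume_id, lba, data):
        # Add the current time slice number to the carried information
        # before writing it into the cache.
        self.cache.append((self.current_number, volume_id, lba, data))

    def entries_for(self, number):
        # More than one entry may correspond to the same number.
        return [e for e in self.cache if e[0] == number]

dev = Device50Cache()
dev.handle_write("vol1", 0x10, "X")
dev.handle_write("vol1", 0x20, "Y")
print(len(dev.entries_for(1)))  # 2
```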
- A replication task is a task in which, for the write data requests received by one data volume within a period of time, the storage device 50 sends the carried information to the storage device of the disaster recovery center, where the carried information has the same current time slice number added to it.
- The replication task may be triggered by a timer or manually, which is not limited here.
- The purpose of the replication is to store the to-be-written data carried in the write data requests received by the storage device 50 to the storage device of the disaster recovery center, so that when the storage device 50 fails, the storage device of the disaster recovery center can take over the storage device 50. It can be understood that the address information (for example, the LBA) carried in each write data request is also sent to the storage device of the disaster recovery center.
- The LBA is used to indicate the address at which the storage device of the disaster recovery center stores the data to be written.
- The storage device of the disaster recovery center has the same physical structure as the storage device 50; therefore, an LBA applicable to the storage device 50 is also applicable to the storage device of the disaster recovery center.
- A replication task is directed at one data volume of the storage device 50; when the storage device 50 includes multiple data volumes, each data volume corresponds to its own replication task.
- the current time slice number manager 503 is configured to modify the current time slice number to identify information carried by the subsequent write data request.
- When the replication task is triggered, the current time slice number manager 503 needs to modify the current time slice number, so that the information carried by subsequent write data requests is added with another number, which is assigned from the modified current time slice number. In this way, the information in the cache that needs to be sent to the storage device of the disaster recovery center can be distinguished from the information carried by the write data requests that the storage device 50 is still receiving.
- The sending module 504 is configured to send the to-be-written data and the address information to the storage device of the disaster recovery center.
- the storage device 50 sends the data to be written and the address information corresponding to the first number read from the cache to the storage device of the disaster recovery center.
- The storage device 50 can directly send all the data to be written and the address information to the storage device of the disaster recovery center; alternatively, after obtaining the ID of a data volume of the storage device of the disaster recovery center, it can generate a new write data request from the data to be written and address information carried by each original write data request together with that data volume ID, and then send the new write data request to the storage device of the disaster recovery center.
- In the embodiment of the present invention, the information carried by a write data request includes data to be written and address information. A first number is added to the data to be written and the address information, which are then written into the cache, where the first number is the current time slice number. When the replication task is triggered, the data to be written and the address information corresponding to the first number are read from the cache and sent to the storage device of the disaster recovery center, and the replication task is performed. In addition, the current time slice number is modified, so that the storage device 50 adds a number equal to the modified current time slice number to the information carried by subsequently received write data requests, and the information in the cache that needs to be sent to the disaster recovery center can be distinguished from newly received information.
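The effect of triggering a replication task can be sketched as follows. This is a hypothetical illustration under assumed names; `send` stands in for transmission to the disaster recovery device.

```python
# Sketch of a replication trigger: bump the current time slice number so
# that later writes get the new number, then send the entries tagged with
# the old (first) number to the disaster recovery storage device.

def trigger_replication(cache, manager, send):
    first_number = manager["current"]
    manager["current"] += 1          # subsequent writes use the new number
    batch = [e for e in cache if e[0] == first_number]
    for _, vol_id, lba, data in batch:
        send(vol_id, lba, data)      # data to be written plus address info
    return first_number

sent = []
cache = [(1, "vol1", 0x10, "X"), (1, "vol1", 0x20, "Y")]
manager = {"current": 1}
trigger_replication(cache, manager, lambda *a: sent.append(a))
print(manager["current"], len(sent))  # 2 2
```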
- FIG. 6 is a schematic structural diagram of a storage device 60 according to an embodiment of the present invention. As shown in FIG. 6, the storage device 60 includes: a receiving module 601, a searching module 602, and a writing module 604.
- the receiving module 601 is configured to receive address information sent by the storage device 50.
- The storage device 60 may receive the data to be written and the address information sent by the storage device 50, or may receive a write data request sent by the storage device 50, where the write data request includes the data to be written and the address information. The address information may be a logical block address (LBA).
- The address information may also include the ID of a data volume of the storage device 60. It can be understood that there may be more than one piece of address information here.
- After receiving the data to be written and the address information, the storage device 60 adds a number that is the same as its current time slice number to them and writes them into the cache, so that the number, the data to be written, and the address information are saved in the cache.
- the storage device 60 may also include a current time slice number manager 603.
- The current time slice number manager 603 stores the current time slice number, which may be represented by a numerical value, such as 0, 1, or 2, or by letters, such as a, b, or c; this is not limited here.
- The current time slice number here need not be associated with the current time slice number in the storage device 50.
- The searching module 602 is configured to: when it is determined that the storage device 50 is faulty, acquire the data to be written corresponding to a first number according to the address information, where the address information corresponding to the first number is the same as the received address information, and the first number is the number before the current time slice number.
- Each time the storage device 50 sends the information carried by write data requests, the storage device 60 receives that information, adds a number that is the same as its current time slice number to the information carried by each write data request, and stores it in the cache.
- When the storage device 50 fails partway through a replication task, the storage device 60 may have received only part of the data to be written corresponding to the current time slice number of the storage device 50. In that case, the data held by the storage device 60 may be incorrect, and if it directly takes over the storage device 50, data consistency cannot be guaranteed.
- Specifically, when the host reads data, the storage device 60 searches for the latest number corresponding to the address information (for example, the LBA) and sends the host the data corresponding to that number, which at this point is the incomplete data corresponding to the current time slice number. Therefore, at this time, the data in the cache of the storage device 60 needs to be restored to the data corresponding to the number before its current time slice number.
- The method for determining that the storage device 50 is faulty may be that a control center sends a signal to the storage device 60, where the signal indicates that the storage device 50 is faulty and that the storage device 60 needs to take over the storage device 50 to process host services.
- When a replication task is completed, the control center can send an indication of successful replication to the storage device 50 and the storage device 60, respectively. If the storage device 60 has not received the indication, the current replication task is not completed.
- Completion of the replication task means that the storage device 50 has sent the information carried by all the write data requests corresponding to the current time slice number to the storage device 60, and the storage device 60 has received all of it.
- When the storage device 60 determines that the storage device 50 has failed, if the current replication task has been completed, the storage device 60 can directly take over the storage device 50, and data consistency can be guaranteed; that situation is not within the scope of the embodiments of the present invention.
- If the current replication task has not been completed, the data in the cache of the storage device 60 needs to be restored to the data corresponding to the number preceding its current time slice number.
- The specific recovery method may be: according to the received address information, searching the address information corresponding to the number immediately before the current time slice number for identical address information; if it is not found, continuing to search the address information corresponding to the next earlier number until the identical address information is found; and then acquiring the data to be written corresponding to that number.
- the writing module 604 is configured to add a second number to the data to be written and the address information corresponding to the first number, and write the buffer.
- The second number is obtained by modifying the current time slice number; in this embodiment, it is the latest number saved in the cache.
- When the host reads data, the storage device 60 finds that the latest number corresponding to the address information is the second number and sends the host the data to be written corresponding to the second number, which ensures data consistency.
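The read path after takeover can be sketched as follows: because recovery re-writes the consistent data under the latest (second) number, a host read that always picks the latest number for an address returns consistent data. Names and the entry layout are illustrative assumptions, not the patented implementation.

```python
# Illustrative sketch of the host read path on the disaster recovery device:
# return the data whose number is the latest for the requested volume/address.

def read_latest(cache, volume_id, lba):
    matches = [e for e in cache if e[1] == volume_id and e[2] == lba]
    return max(matches, key=lambda e: e[0])[3] if matches else None

cache = [
    (1, "vol1", 0x10, "safe"),     # from the completed replication task
    (2, "vol1", 0x10, "partial"),  # from the incomplete task
    (3, "vol1", 0x10, "safe"),     # re-tagged with the second number
]
print(read_latest(cache, "vol1", 0x10))  # safe
```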
- In the embodiment of the present invention, the storage device 60 receives the address information sent by the storage device 50. When the storage device 50 fails, the data to be written corresponding to the number before the current time slice number is acquired according to the address information, a second number is added to it, and it is written into the cache, so that data consistency is guaranteed when the storage device 60 takes over.
- an embodiment of the present invention provides a schematic diagram of a storage device 700.
- The storage device 700 may be a storage device known in the prior art.
- the specific embodiment of the present invention does not limit the specific implementation of the storage device 700.
- the storage device 700 includes:
- a processor 710, a communication interface 720, a memory 730, and a communication bus 740.
- the processor 710, the communication interface 720, and the memory 730 complete communication with each other via the communication bus 740.
- the communication interface 720 is configured to communicate with a network element, such as a host or a switch.
- the processor 710 is configured to execute the program 732.
- program 732 can include program code, the program code including computer operating instructions.
- The processor 710 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention.
- the memory 730 is configured to store the program 732.
- Memory 730 may include high speed RAM memory and may also include non-volatile memory, such as at least one disk memory.
- the program 732 may specifically include:
- the receiving module 501 is configured to receive a first write data request sent by the host, where the first write data request carries data to be written and address information.
- the reading and writing module 502 is configured to add the first number to the data to be written and the address information and write them into the cache, where the first number is the current time slice number, and to read the to-be-written data and address information corresponding to the first number from the cache.
- the current time slice number manager 503 is configured to modify the current time slice number to identify information carried by the subsequent write data request.
- the sending module 504 is configured to send the data to be written and the address information to the storage device of the disaster recovery center.
- an embodiment of the present invention provides a schematic diagram of a storage device 800.
- The storage device 800 may be a storage device known in the prior art; the specific embodiments of the present invention do not limit the specific implementation of the storage device 800.
- Storage device 800 includes:
- a processor 810, a communication interface 820, a memory 830, and a communication bus 840. The processor 810, the communication interface 820, and the memory 830 complete communication with each other via the communication bus 840.
- the communication interface 820 is configured to communicate with a network element, such as a host or a switch.
- the processor 810 is configured to execute the program 832.
- program 832 can include program code, the program code including computer operating instructions.
- The processor 810 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention.
- the memory 830 is configured to store the program 832.
- the memory 830 may include a high speed RAM memory and may also include a non-volatile memory such as at least one disk memory.
- the program 832 may specifically include:
- the receiving module 601 is configured to receive address information sent by the storage device 50.
- the searching module 602 is configured to: when it is determined that the storage device 50 is faulty, acquire the data to be written corresponding to a first number according to the address information, where the address information corresponding to the first number is the same as the received address information, and the first number is the number before the current time slice number.
- the writing module 604 is configured to add a second number to the data to be written and the address information corresponding to the first number, and write the buffer.
- the storage system consists of a production center and at least two disaster recovery centers.
- Production centers include production hosts, connected equipment, and production arrays.
- the system architecture of the disaster recovery center is similar to that of the production center, including disaster recovery hosts, connected devices, and disaster recovery arrays.
- The production center and the disaster recovery center can transmit data over IP (Internet Protocol) or FC (Fibre Channel).
- the control center can be deployed on the production center or on the disaster recovery center.
- The control center may also be deployed on a third-party device located between the production center and the disaster recovery center.
- the control center is configured to send a signal to the disaster recovery array to take over the production array to process the host service when the production array fails.
- Both the production host and the disaster recovery host can be any computing device known in the art, such as servers, desktop computers, and the like. Inside the host, an operating system and other applications are installed.
- The connection device may include any interface, known in the prior art, between a storage device and a host, such as a Fibre Channel switch or another existing switch.
- Both the production array and the disaster recovery array may be storage devices known in the prior art, for example, a Redundant Array of Independent Disks (also called Redundant Arrays of Inexpensive Disks, RAID), Just a Bunch Of Disks (JBOD), or a Direct Access Storage Device (DASD) of one or more interconnected disk drives, such as a tape library or a tape storage device with one or more storage units.
- the storage space of the production array may include multiple data volumes.
- the data volume is a logical storage space mapped by physical storage space.
- the data volume may be a Logical Unit Number (LUN) or a file system.
- the structure of the disaster recovery array is similar to the production array.
- The task in which the production array copies data from one of its stored data volumes to a disaster recovery array is referred to as a replication relationship (also known as a pair).
- Each replication relationship corresponds to a unique identifier (for example, an ID). Because the production array continuously receives write data requests sent by the host before a disaster occurs, it also needs to continuously copy its stored data to the disaster recovery array. A replication relationship can therefore be divided into several time segments, and the task in which the production array sends, to the disaster recovery array, the information carried by the write data requests received by the data volume within each time segment is called a replication task.
- A current time slice number manager may be included in the production array and stores a current time slice number, which may be represented by a numerical value, such as 0, 1, or 2, or by letters, such as a, b, or c; this is not limited here. It should be noted that the same current time slice number applies to every disaster recovery array, and the current time slice number is modified each time a replication task is triggered.
- When the replication task corresponding to the first disaster recovery array is triggered, the production array modifies the current time slice number from the value 1 to the value 2, so that a number with the value 2 is added to the data to be written and the address information carried by write data requests received thereafter, and sends the data to be written and the address information corresponding to the number 1 to the first disaster recovery array.
- When the next replication task is triggered, the production array changes the current time slice number from the value 2 to the value 3, so that a number with the value 3 is added to the data to be written and the address information carried in subsequently received write data requests.
- When a further replication task is triggered, the production array changes the current time slice number from the value 3 to the value 4, so that a number with the value 4 is added to the data to be written and the address information carried in subsequently received write data requests.
- When the replication task corresponding to the first disaster recovery array is triggered again, the production array changes the current time slice number from the value 4 to the value 5, so that a number with the value 5 is added to the data to be written and the address information carried in write data requests received thereafter. Assume that only the current time slice number is recorded in the production array; accordingly, only the data to be written and the address information corresponding to the number 4 would be sent to the first disaster recovery array. The data to be written and the address information corresponding to the number 2 and to the number 3 would then be missed, and the data stored in the first disaster recovery array would be inconsistent with the production array. Similarly, the second disaster recovery array and the third disaster recovery array also face the problem that the received data to be written and address information are incomplete.
- To solve this problem, FIG. 10 shows a data replication method proposed by the present invention.
- In the following description, the production array is referred to as a first storage device, and one of the at least two disaster recovery arrays is referred to as a second storage device. It should be noted that, for the specific execution of the following steps, reference may be made to the embodiments shown in FIG. 2 to FIG. 4.
- the method includes:
- Step 41: When the current replication task is triggered, the first storage device reads the current time slice number.
- The replication task may be triggered by a timer, triggered manually, or triggered in another manner; the trigger mode is not limited here.
- The first storage device can read the current time slice number from the current time slice number manager. It should be noted that, when the replication task is triggered, the first storage device modifies the current time slice number, and the current time slice number read by the first storage device here refers to the modified current time slice number.
- the current time slice number before modification can be referred to as a historical time slice number.
- Step 42: Read a second number, where the second number is the number corresponding to the most recently completed replication task associated with the current replication task.
- That the most recently completed replication task is associated with the current replication task means that the current replication task and the most recently completed replication task belong to the same replication relationship, and each replication relationship has a unique ID.
- the first storage device can receive the ID and read the second number based on the ID.
- When the current replication task is triggered by a timer, the ID may be carried in the timer; when the current replication task is triggered manually, the first storage device may receive the ID through a signal or the like.
- Each time a replication task is completed, the number corresponding to the completed replication task is recorded.
- Step 43: Determine a first number according to the current time slice number and the second number, where the first number is a number before the current time slice number at the time the current replication task is triggered and after the second number.
- For example, if the current time slice number is the value 5 and the second number is the value 2, the numbers within the interval (2, 5) can be determined as first numbers. It should be noted that the interval is open: it includes neither the value 2 nor the value 5.
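Under the assumption that the current time slice number is increased by 1 each time, determining the first numbers in step 43 reduces to enumerating the open interval between the second number and the current time slice number, for example:

```python
# Sketch of step 43 (illustrative): every number strictly between the
# second number and the current time slice number is a first number.

def first_numbers(current_number, second_number):
    # Open interval: excludes both endpoints.
    return list(range(second_number + 1, current_number))

print(first_numbers(5, 2))  # [3, 4]
```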
- Step 44: Copy, to the second storage device, the to-be-copied data and the address information of the to-be-copied data that are saved in the cache and correspond to the first number.
- Specifically, the to-be-copied data and the address information of the to-be-copied data corresponding to the first number are read from the cache and sent to the second storage device.
- the first storage device may directly send the data to be copied and the address information of the data to be copied to the second storage device, or generate a write data request according to the data to be copied and the address information of the data to be copied. Sending the write data request to the second storage device.
- When one piece of address information corresponds to multiple numbers, only the to-be-copied data and the address information corresponding to the latest of those numbers need to be sent to the second storage device.
- The latest number is the number generated last. For example, if the current time slice number is modified by adding 1 each time, the latest number is the number with the largest value.
- In the embodiment of the present invention, the first storage device determines the first number according to the current time slice number and the second number, where the second number is the number corresponding to the replication task most recently completed before the current replication task, and the first number is before the current time slice number at the time the current replication task is triggered and after the second number. The to-be-copied data and the address information of the to-be-copied data that are saved in the cache and correspond to the first number are copied to the second storage device. Because every number between the second number and the current time slice number can be determined as a first number, the to-be-copied data and address information corresponding to each such number can be copied to the second storage device. Even if the current time slice number has been modified several times since the previous replication task of this replication relationship was completed, the first storage device can use the second number to find the to-be-copied data and the address information of the to-be-copied data that have not been copied to the second storage device and copy them to it, ensuring the completeness of the replication.
- Before the current replication task is triggered, the method further includes: receiving a first write data request, where the first write data request includes the to-be-copied data and the address information of the to-be-copied data; and adding a first number to the to-be-copied data and the address information of the to-be-copied data and writing them into the cache, where the first number is a historical time slice number.
- The historical time slice number refers to the current time slice number at the time the first write data request is received. It can be seen from the embodiments shown in FIG. 2 to FIG. 4 that the historical time slice number is changed into the current time slice number when the replication task is triggered.
- Optionally, the method may further include: when the address information of target data is the same as the address information of the data to be copied, replacing the data to be copied stored in the cache with the target data; or writing the target data with a third number added, together with the address information of the target data, into the cache.
- FIG. 11 is a schematic structural diagram of a storage device according to an embodiment of the present invention.
- the storage device includes: a read/write module 52, a determination module 53 and a replication module 54.
- The reading and writing module 52 is configured to read the current time slice number when the current replication task is triggered, and to read a second number, where the second number is the number corresponding to the most recently completed replication task associated with the current replication task.
- The determining module 53 is configured to determine a first number according to the current time slice number and the second number, where the first number is a number before the current time slice number at the time the current replication task is triggered and after the second number.
- The copying module 54 is configured to copy, to the second storage device, the to-be-copied data and the address information of the to-be-copied data that are saved in the cache and correspond to the first number.
- In the embodiment of the present invention, the first storage device determines the first number according to the current time slice number and the second number, where the second number is the number corresponding to the replication task most recently completed before the current replication task, and the first number is before the current time slice number at the time the current replication task is triggered and after the second number. The to-be-copied data and the address information of the to-be-copied data that are saved in the cache and correspond to the first number are copied to the second storage device. Because every number between the second number and the current time slice number can be determined as a first number, the to-be-copied data and address information corresponding to each such number can be copied to the second storage device. The first storage device can thus use the second number to find the to-be-copied data and the address information of the to-be-copied data that have not been copied to the second storage device and copy them to it, ensuring the completeness of the replication.
- the storage device may further include: a recording module 55, configured to record the second number.
- That the most recently completed replication task is associated with the current replication task means that the current replication task and the most recently completed replication task belong to the same replication relationship.
- the storage device further includes a receiving module 51;
- the receiving module 51 is configured to receive an identifier corresponding to the replication relationship.
- the read/write module 52 is configured to read, according to the identifier, the second number corresponding to the current replication task.
- the receiving module 51 is further configured to: before the current replication task is triggered, receive a first write data request, where the first write data request includes the to-be-copied data and address information of the data to be copied;
- the read/write module 52 is further configured to add the first number to the data to be copied and the address information of the data to be copied, and write them into the cache, where the first number is a historical time slice number.
- the current time slice number is obtained by modifying the historical time slice number.
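The write path described by the two items above can be sketched like this: each incoming write is tagged with the time slice number that is current at that moment, and triggering a replication task modifies the number, so earlier tags become historical time slice numbers. A minimal sketch under those assumptions; the class and method names are illustrative, not from the patent.

```python
class SourceCache:
    """Illustrative model of the first storage device's cache."""

    def __init__(self):
        self.current_time_slice_number = 0
        self.cache = {}  # time-slice number -> {address: data}

    def handle_write(self, address, data):
        # tag the write data and its address information with the
        # time slice number that is current when the write arrives
        bucket = self.cache.setdefault(self.current_time_slice_number, {})
        bucket[address] = data

    def trigger_replication_task(self):
        # modifying the current number turns the previous one into a
        # historical time slice number; writes tagged with historical
        # numbers are the candidates for the triggered task
        self.current_time_slice_number += 1
        return self.current_time_slice_number
```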
- the copying module 54 is specifically configured to: when the address information corresponds to multiple numbers, determine that the latest number among those corresponding to the address information is the first number, and copy the data to be copied and the address information of the data to be copied saved in the cache corresponding to that number to the second storage device.
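The latest-number rule above avoids sending stale overwrites: if the same address was written under several numbers, only the data tagged with the latest of those numbers needs to be copied. A hedged sketch of that de-duplication step; the function name and cache layout are assumptions carried over from the earlier sketches.

```python
def latest_only(cache, candidate_numbers):
    """cache: time-slice number -> {address: data}.
    Returns one {address: data} map keeping, for each address, the data
    saved under the latest candidate number that wrote that address."""
    result = {}
    # iterate in ascending order so a later number's data for the same
    # address overwrites an earlier number's data
    for number in sorted(candidate_numbers):
        result.update(cache.get(number, {}))
    return result
```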
- the storage device provided by the embodiment of the present invention is used to perform the data copying method described in the foregoing embodiments.
- a storage device includes:
- a processor 101, a memory 102, a system bus (abbreviated as bus) 105, and a communication interface 103.
- the processor 101, the memory 102, and the communication interface 103 are connected by the system bus 105 and communicate with each other.
- the processor 101 may be a single-core or multi-core central processing unit, or an application-specific integrated circuit, or one or more integrated circuits configured to implement embodiments of the present invention.
- the memory 102 can be a high-speed RAM memory or a non-volatile memory, for example, at least one disk memory.
- Communication interface 103 is used to communicate with the storage device.
- the memory 102 is used to store computer-executable instructions 1021; specifically, the computer-executable instructions 1021 may include program code.
- the processor 101 runs the computer-executable instructions 1021 and can thereby perform the method flows described in the foregoing embodiments.
- the disclosed apparatus and method may be implemented in other manners.
- the device embodiments described above are merely illustrative.
- the division of the modules is only a logical function division; in actual implementation there may be another division manner. For example, multiple modules or components may be combined or integrated into another device, or some features may be ignored or not executed.
- the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some communication interface, device or module, and may be electrical, mechanical or otherwise.
- the modules described as separate components may or may not be physically separated.
- the components displayed as modules may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
- each functional module in each embodiment of the present invention may be integrated into one processing module, or each module may exist physically separately, or two or more modules may be integrated into one module.
- a person skilled in the art may understand that all or part of the steps of the above embodiments may be completed by hardware, or by a program instructing the relevant hardware, and the program may be stored in a computer-readable storage medium.
- the storage medium mentioned may be a read-only memory, a magnetic disk, an optical disk, or the like.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Quality & Reliability (AREA)
- Human Computer Interaction (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Memory System Of A Hierarchy Structure (AREA)
- Retry When Errors Occur (AREA)
- Computer Security & Cryptography (AREA)
Priority Applications (9)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201380042349.5A CN104520802B (zh) | 2013-07-26 | 2013-11-15 | 数据发送方法、数据接收方法和存储设备 |
JP2015527787A JP6344798B2 (ja) | 2013-07-26 | 2013-11-15 | データ送信方法、データ受信方法、及びストレージデバイス |
ES13878530.8T ES2610784T3 (es) | 2013-07-26 | 2013-11-15 | Método de envío de datos, método de recepción de datos y dispositivo de almacenamiento |
KR1020147029051A KR101602312B1 (ko) | 2013-07-26 | 2013-11-15 | 데이터 송신 방법, 데이터 수신 방법, 및 저장 장치 |
AU2013385792A AU2013385792B2 (en) | 2013-07-26 | 2013-11-15 | Data sending method, data receiving method, and storage device |
RU2014145359/08A RU2596585C2 (ru) | 2013-07-26 | 2013-11-15 | Способ отправки данных, способ приема данных и устройство хранения данных |
EP13878530.8A EP2849048B1 (en) | 2013-07-26 | 2013-11-15 | Data sending method, data receiving method and storage device |
US14/582,556 US9311191B2 (en) | 2013-07-26 | 2014-12-24 | Method for a source storage device sending data to a backup storage device for storage, and storage device |
US15/064,890 US10108367B2 (en) | 2013-07-26 | 2016-03-09 | Method for a source storage device sending data to a backup storage device for storage, and storage device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CNPCT/CN2013/080203 | 2013-07-26 | ||
PCT/CN2013/080203 WO2015010327A1 (zh) | 2013-07-26 | 2013-07-26 | 数据发送方法、数据接收方法和存储设备 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/582,556 Continuation US9311191B2 (en) | 2013-07-26 | 2014-12-24 | Method for a source storage device sending data to a backup storage device for storage, and storage device |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015010394A1 true WO2015010394A1 (zh) | 2015-01-29 |
Family
ID=50253404
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2013/080203 WO2015010327A1 (zh) | 2013-07-26 | 2013-07-26 | 数据发送方法、数据接收方法和存储设备 |
PCT/CN2013/087229 WO2015010394A1 (zh) | 2013-07-26 | 2013-11-15 | 数据发送方法、数据接收方法和存储设备 |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2013/080203 WO2015010327A1 (zh) | 2013-07-26 | 2013-07-26 | 数据发送方法、数据接收方法和存储设备 |
Country Status (13)
Country | Link |
---|---|
US (2) | US9311191B2 (zh) |
EP (2) | EP3179359B1 (zh) |
JP (2) | JP6344798B2 (zh) |
KR (1) | KR101602312B1 (zh) |
CN (1) | CN103649901A (zh) |
AU (2) | AU2013385792B2 (zh) |
CA (1) | CA2868247C (zh) |
DK (1) | DK3179359T3 (zh) |
ES (2) | ES2610784T3 (zh) |
HU (1) | HUE037094T2 (zh) |
NO (1) | NO3179359T3 (zh) |
RU (1) | RU2596585C2 (zh) |
WO (2) | WO2015010327A1 (zh) |
Families Citing this family (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103649901A (zh) * | 2013-07-26 | 2014-03-19 | 华为技术有限公司 | 数据发送方法、数据接收方法和存储设备 |
CN103488431A (zh) * | 2013-09-10 | 2014-01-01 | 华为技术有限公司 | 一种写数据方法及存储设备 |
US9552248B2 (en) | 2014-12-11 | 2017-01-24 | Pure Storage, Inc. | Cloud alert to replica |
US10545987B2 (en) * | 2014-12-19 | 2020-01-28 | Pure Storage, Inc. | Replication to the cloud |
CN106407040B (zh) | 2016-09-05 | 2019-05-24 | 华为技术有限公司 | 一种远程数据复制方法及*** |
CN107844259B (zh) * | 2016-09-18 | 2020-06-16 | 华为技术有限公司 | 数据访问方法、路由装置和存储*** |
CN112068992A (zh) * | 2016-10-28 | 2020-12-11 | 华为技术有限公司 | 一种远程数据复制方法、存储设备及存储*** |
CN108076090B (zh) * | 2016-11-11 | 2021-05-18 | 华为技术有限公司 | 数据处理方法和存储管理*** |
CN106598768B (zh) * | 2016-11-28 | 2020-02-14 | 华为技术有限公司 | 一种处理写请求的方法、装置和数据中心 |
CN108449277B (zh) * | 2016-12-12 | 2020-07-24 | 华为技术有限公司 | 一种报文发送方法及装置 |
CN106776369B (zh) * | 2016-12-12 | 2020-07-24 | 苏州浪潮智能科技有限公司 | 一种缓存镜像的方法及装置 |
CN108475254A (zh) * | 2016-12-16 | 2018-08-31 | 华为技术有限公司 | 对象复制方法、装置及对象存储设备 |
CN106776147B (zh) * | 2016-12-29 | 2020-10-09 | 华为技术有限公司 | 一种差异数据备份方法和差异数据备份装置 |
CN107122261B (zh) * | 2017-04-18 | 2020-04-07 | 杭州宏杉科技股份有限公司 | 一种存储设备的数据读写方法及装置 |
CN107577421A (zh) * | 2017-07-31 | 2018-01-12 | 深圳市牛鼎丰科技有限公司 | 智能设备扩容方法、装置、存储介质和计算机设备 |
AU2018357856B2 (en) * | 2017-10-31 | 2021-03-18 | Ab Initio Technology Llc | Managing a computing cluster using time interval counters |
CN108052294B (zh) * | 2017-12-26 | 2021-05-28 | 郑州云海信息技术有限公司 | 一种分布式存储***的修改写方法和修改写*** |
US11216370B2 (en) * | 2018-02-20 | 2022-01-04 | Medtronic, Inc. | Methods and devices that utilize hardware to move blocks of operating parameter data from memory to a register set |
US10642521B2 (en) * | 2018-05-11 | 2020-05-05 | International Business Machines Corporation | Scaling distributed queues in a distributed storage network |
CN109032527B (zh) * | 2018-07-27 | 2021-07-27 | 深圳华大北斗科技有限公司 | 数据处理方法、存储介质及计算机设备 |
US10942725B2 (en) * | 2018-07-30 | 2021-03-09 | Ford Global Technologies, Llc | Over the air Ecu update |
US11038961B2 (en) | 2018-10-26 | 2021-06-15 | Western Digital Technologies, Inc. | Ethernet in data storage device |
CN109697035B (zh) * | 2018-12-24 | 2022-03-29 | 深圳市明微电子股份有限公司 | 级联设备的地址数据的写入方法、写入设备及存储介质 |
US11620230B2 (en) * | 2019-05-24 | 2023-04-04 | Texas Instruments Incorporated | Methods and apparatus to facilitate read-modify-write support in a coherent victim cache with parallel data paths |
US11119862B2 (en) * | 2019-10-11 | 2021-09-14 | Seagate Technology Llc | Delta information volumes to enable chained replication of data by uploading snapshots of data to cloud |
CN114731282B (zh) * | 2019-11-22 | 2023-06-02 | 华为技术有限公司 | 处理非缓存写数据请求的方法、缓存器和节点 |
US11755230B2 (en) * | 2021-04-22 | 2023-09-12 | EMC IP Holding Company LLC | Asynchronous remote replication of snapshots |
US12008018B2 (en) * | 2021-04-22 | 2024-06-11 | EMC IP Holding Company LLC | Synchronous remote replication of snapshots |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101751230A (zh) * | 2009-12-29 | 2010-06-23 | 成都市华为赛门铁克科技有限公司 | 标定i/o数据的时间戳的设备及方法 |
CN102306115A (zh) * | 2011-05-20 | 2012-01-04 | 成都市华为赛门铁克科技有限公司 | 异步远程复制方法、***及设备 |
Family Cites Families (56)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR0128271B1 (ko) * | 1994-02-22 | 1998-04-15 | 윌리암 티. 엘리스 | 재해회복을 위한 일관성 그룹 형성방법 및 레코드갱싱의 섀도잉 방법, 주시스템, 원격데이타 섀도잉 시스템과 비동기 원격데이타 복제 시스템 |
US5758359A (en) * | 1996-10-24 | 1998-05-26 | Digital Equipment Corporation | Method and apparatus for performing retroactive backups in a computer system |
US6081875A (en) | 1997-05-19 | 2000-06-27 | Emc Corporation | Apparatus and method for backup of a disk storage system |
JP2000137638A (ja) * | 1998-10-29 | 2000-05-16 | Hitachi Ltd | 情報記憶システム |
US6526418B1 (en) * | 1999-12-16 | 2003-02-25 | Livevault Corporation | Systems and methods for backing up data files |
US6675177B1 (en) * | 2000-06-21 | 2004-01-06 | Teradactyl, Llc | Method and system for backing up digital data |
US6988165B2 (en) * | 2002-05-20 | 2006-01-17 | Pervasive Software, Inc. | System and method for intelligent write management of disk pages in cache checkpoint operations |
JP2004013367A (ja) * | 2002-06-05 | 2004-01-15 | Hitachi Ltd | データ記憶サブシステム |
US7761421B2 (en) | 2003-05-16 | 2010-07-20 | Hewlett-Packard Development Company, L.P. | Read, write, and recovery operations for replicated data |
JP2005309550A (ja) * | 2004-04-19 | 2005-11-04 | Hitachi Ltd | リモートコピー方法及びリモートコピーシステム |
JP4267421B2 (ja) * | 2003-10-24 | 2009-05-27 | 株式会社日立製作所 | リモートサイト及び/又はローカルサイトのストレージシステム及びリモートサイトストレージシステムのファイル参照方法 |
US7054883B2 (en) * | 2003-12-01 | 2006-05-30 | Emc Corporation | Virtual ordered writes for multiple storage devices |
ES2345388T3 (es) * | 2004-02-12 | 2010-09-22 | Irdeto Access B.V. | Metodo y sistema de almacenamiento de datos externo. |
JP4455927B2 (ja) * | 2004-04-22 | 2010-04-21 | 株式会社日立製作所 | バックアップ処理方法及び実施装置並びに処理プログラム |
CN100359476C (zh) * | 2004-06-03 | 2008-01-02 | 华为技术有限公司 | 一种快照备份的方法 |
JP4519563B2 (ja) * | 2004-08-04 | 2010-08-04 | 株式会社日立製作所 | 記憶システム及びデータ処理システム |
JP4377790B2 (ja) * | 2004-09-30 | 2009-12-02 | 株式会社日立製作所 | リモートコピーシステムおよびリモートコピー方法 |
US7519851B2 (en) * | 2005-02-08 | 2009-04-14 | Hitachi, Ltd. | Apparatus for replicating volumes between heterogenous storage systems |
US8127174B1 (en) * | 2005-02-28 | 2012-02-28 | Symantec Operating Corporation | Method and apparatus for performing transparent in-memory checkpointing |
US8005795B2 (en) * | 2005-03-04 | 2011-08-23 | Emc Corporation | Techniques for recording file operations and consistency points for producing a consistent copy |
US7310716B2 (en) * | 2005-03-04 | 2007-12-18 | Emc Corporation | Techniques for producing a consistent copy of source data at a target location |
JP2007066154A (ja) | 2005-09-01 | 2007-03-15 | Hitachi Ltd | データをコピーして複数の記憶装置に格納するストレージシステム |
CA2632935C (en) * | 2005-12-19 | 2014-02-04 | Commvault Systems, Inc. | Systems and methods for performing data replication |
US7761663B2 (en) * | 2006-02-16 | 2010-07-20 | Hewlett-Packard Development Company, L.P. | Operating a replicated cache that includes receiving confirmation that a flush operation was initiated |
JP2007323507A (ja) * | 2006-06-02 | 2007-12-13 | Hitachi Ltd | 記憶システム並びにこれを用いたデータの処理方法 |
US8150805B1 (en) * | 2006-06-30 | 2012-04-03 | Symantec Operating Corporation | Consistency interval marker assisted in-band commands in distributed systems |
US7885923B1 (en) * | 2006-06-30 | 2011-02-08 | Symantec Operating Corporation | On demand consistency checkpoints for temporal volumes within consistency interval marker based replication |
US8726242B2 (en) * | 2006-07-27 | 2014-05-13 | Commvault Systems, Inc. | Systems and methods for continuous data replication |
CN100485629C (zh) * | 2006-08-15 | 2009-05-06 | 英业达股份有限公司 | 群聚式计算机***高速缓存数据备份处理方法及*** |
GB0616257D0 (en) * | 2006-08-16 | 2006-09-27 | Ibm | Storage management system for preserving consistency of remote copy data |
US8145865B1 (en) * | 2006-09-29 | 2012-03-27 | Emc Corporation | Virtual ordered writes spillover mechanism |
KR20080033763A (ko) | 2006-10-13 | 2008-04-17 | 삼성전자주식회사 | 와이브로 네트워크에서의 상호인증을 통한 핸드오버 방법및 그 시스템 |
US8768890B2 (en) * | 2007-03-14 | 2014-07-01 | Microsoft Corporation | Delaying database writes for database consistency |
JP4964714B2 (ja) * | 2007-09-05 | 2012-07-04 | 株式会社日立製作所 | ストレージ装置及びデータの管理方法 |
US8073922B2 (en) * | 2007-07-27 | 2011-12-06 | Twinstrata, Inc | System and method for remote asynchronous data replication |
US8140772B1 (en) * | 2007-11-06 | 2012-03-20 | Board Of Governors For Higher Education, State Of Rhode Island And Providence Plantations | System and method for maintaining redundant storages coherent using sliding windows of eager execution transactions |
CN101634968B (zh) * | 2008-01-17 | 2011-12-14 | 四川格瑞特科技有限公司 | 一种用于备份***的海量数据高速缓存器的构造方法 |
EP3699765A1 (en) | 2008-08-08 | 2020-08-26 | Amazon Technologies, Inc. | Providing executing programs with reliable access to non-local block data storage |
US8250031B2 (en) | 2008-08-26 | 2012-08-21 | Hitachi, Ltd. | Low traffic failback remote copy |
US8767934B2 (en) | 2008-09-03 | 2014-07-01 | Avaya Inc. | Associating a topic with a telecommunications address |
US8762642B2 (en) * | 2009-01-30 | 2014-06-24 | Twinstrata Inc | System and method for secure and reliable multi-cloud data replication |
US8793288B2 (en) * | 2009-12-16 | 2014-07-29 | Sap Ag | Online access to database snapshots |
US9389892B2 (en) * | 2010-03-17 | 2016-07-12 | Zerto Ltd. | Multiple points in time disk images for disaster recovery |
JP5170169B2 (ja) * | 2010-06-18 | 2013-03-27 | Necシステムテクノロジー株式会社 | ディスクアレイ装置間のリモートコピー処理システム、処理方法、及び処理用プログラム |
CN101901173A (zh) * | 2010-07-22 | 2010-12-01 | 上海骊畅信息科技有限公司 | 一种灾备***及灾备方法 |
US8443149B2 (en) * | 2010-09-01 | 2013-05-14 | International Business Machines Corporation | Evicting data from a cache via a batch file |
US8255637B2 (en) * | 2010-09-27 | 2012-08-28 | Infinidat Ltd. | Mass storage system and method of operating using consistency checkpoints and destaging |
US8667236B2 (en) * | 2010-09-29 | 2014-03-04 | Hewlett-Packard Development Company, L.P. | Host based write ordering for asynchronous replication |
US9792941B2 (en) * | 2011-03-23 | 2017-10-17 | Stormagic Limited | Method and system for data replication |
CN103092526B (zh) | 2011-10-31 | 2016-03-30 | 国际商业机器公司 | 在存储设备间进行数据迁移的方法和装置 |
US8806281B1 (en) * | 2012-01-23 | 2014-08-12 | Symantec Corporation | Systems and methods for displaying backup-status information for computing resources |
JP6183876B2 (ja) * | 2012-03-30 | 2017-08-23 | 日本電気株式会社 | レプリケーション装置、レプリケーション方法及びプログラム |
US20130339569A1 (en) * | 2012-06-14 | 2013-12-19 | Infinidat Ltd. | Storage System and Method for Operating Thereof |
US10318495B2 (en) * | 2012-09-24 | 2019-06-11 | Sandisk Technologies Llc | Snapshots for a non-volatile device |
US9311014B2 (en) * | 2012-11-29 | 2016-04-12 | Infinidat Ltd. | Storage system and methods of mapping addresses of snapshot families |
CN103649901A (zh) * | 2013-07-26 | 2014-03-19 | 华为技术有限公司 | 数据发送方法、数据接收方法和存储设备 |
- 2013
- 2013-07-26 CN CN201380001270.8A patent/CN103649901A/zh active Pending
- 2013-07-26 CA CA2868247A patent/CA2868247C/en active Active
- 2013-07-26 WO PCT/CN2013/080203 patent/WO2015010327A1/zh active Application Filing
- 2013-11-15 NO NO16177686A patent/NO3179359T3/no unknown
- 2013-11-15 JP JP2015527787A patent/JP6344798B2/ja active Active
- 2013-11-15 DK DK16177686.9T patent/DK3179359T3/en active
- 2013-11-15 HU HUE16177686A patent/HUE037094T2/hu unknown
- 2013-11-15 KR KR1020147029051A patent/KR101602312B1/ko active IP Right Grant
- 2013-11-15 ES ES13878530.8T patent/ES2610784T3/es active Active
- 2013-11-15 EP EP16177686.9A patent/EP3179359B1/en active Active
- 2013-11-15 RU RU2014145359/08A patent/RU2596585C2/ru active
- 2013-11-15 ES ES16177686.9T patent/ES2666580T3/es active Active
- 2013-11-15 AU AU2013385792A patent/AU2013385792B2/en active Active
- 2013-11-15 EP EP13878530.8A patent/EP2849048B1/en active Active
- 2013-11-15 WO PCT/CN2013/087229 patent/WO2015010394A1/zh active Application Filing
- 2014
- 2014-12-24 US US14/582,556 patent/US9311191B2/en active Active
- 2016
- 2016-03-09 US US15/064,890 patent/US10108367B2/en active Active
- 2016-05-19 AU AU2016203273A patent/AU2016203273A1/en not_active Abandoned
- 2017
- 2017-12-05 JP JP2017233306A patent/JP2018041506A/ja active Pending
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101751230A (zh) * | 2009-12-29 | 2010-06-23 | 成都市华为赛门铁克科技有限公司 | 标定i/o数据的时间戳的设备及方法 |
CN102306115A (zh) * | 2011-05-20 | 2012-01-04 | 成都市华为赛门铁克科技有限公司 | 异步远程复制方法、***及设备 |
Non-Patent Citations (1)
Title |
---|
See also references of EP2849048A4 * |
Also Published As
Publication number | Publication date |
---|---|
US9311191B2 (en) | 2016-04-12 |
AU2013385792B2 (en) | 2016-04-14 |
EP2849048A1 (en) | 2015-03-18 |
RU2596585C2 (ru) | 2016-09-10 |
EP2849048B1 (en) | 2016-10-19 |
AU2016203273A1 (en) | 2016-06-09 |
US20160188240A1 (en) | 2016-06-30 |
AU2013385792A1 (en) | 2015-02-12 |
EP2849048A4 (en) | 2015-05-27 |
WO2015010327A1 (zh) | 2015-01-29 |
CN103649901A (zh) | 2014-03-19 |
ES2610784T3 (es) | 2017-05-03 |
NO3179359T3 (zh) | 2018-08-04 |
KR101602312B1 (ko) | 2016-03-21 |
ES2666580T3 (es) | 2018-05-07 |
HUE037094T2 (hu) | 2018-08-28 |
CA2868247A1 (en) | 2015-01-26 |
EP3179359A1 (en) | 2017-06-14 |
JP2015527670A (ja) | 2015-09-17 |
EP3179359B1 (en) | 2018-03-07 |
RU2014145359A (ru) | 2016-05-27 |
KR20150035507A (ko) | 2015-04-06 |
US10108367B2 (en) | 2018-10-23 |
DK3179359T3 (en) | 2018-06-14 |
JP2018041506A (ja) | 2018-03-15 |
JP6344798B2 (ja) | 2018-06-20 |
US20150113317A1 (en) | 2015-04-23 |
CA2868247C (en) | 2017-04-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2015010394A1 (zh) | 数据发送方法、数据接收方法和存储设备 | |
US11461202B2 (en) | Remote data replication method and system | |
US10467246B2 (en) | Content-based replication of data in scale out system | |
JP6264666B2 (ja) | データ格納方法、データストレージ装置、及びストレージデバイス | |
WO2015085530A1 (zh) | 数据复制方法及存储*** | |
CN107133132B (zh) | 数据发送方法、数据接收方法和存储设备 | |
WO2015085529A1 (zh) | 数据复制方法、数据复制装置和存储设备 | |
WO2019080370A1 (zh) | 一种数据读写方法、装置和存储服务器 | |
WO2018076633A1 (zh) | 一种远程数据复制方法、存储设备及存储*** | |
WO2014190501A1 (zh) | 数据恢复方法、存储设备和存储*** | |
WO2022033269A1 (zh) | 数据处理的方法、设备及*** | |
US10740189B2 (en) | Distributed storage system | |
JP6376626B2 (ja) | データ格納方法、データストレージ装置、及びストレージデバイス | |
US10656867B2 (en) | Computer system, data management method, and data management program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
REEP | Request for entry into the european phase |
Ref document number: 2013878530 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2013878530 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2013385792 Country of ref document: AU |
|
ENP | Entry into the national phase |
Ref document number: 20147029051 Country of ref document: KR Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2015527787 Country of ref document: JP Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2014145359 Country of ref document: RU Kind code of ref document: A |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 13878530 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |